Unable to use S3 file backend with 2016.3.1 on Ubuntu 14.04 or 16.04 #34074

Closed
fooka03 opened this issue Jun 16, 2016 · 16 comments

@fooka03

fooka03 commented Jun 16, 2016

Description of Issue/Question

After upgrading to 2016.3.1, running states with source files located in an S3 bucket configured as a fileserver backend fails with the message "Source file salt://myfile not found". Explicitly syncing the fileserver cache with salt-run fileserver.update backend=s3fs results in no files being downloaded and a warning in the logs: [salt.loaded.int.fileserver.s3fs][WARNING ][26138] S3 Error! Do you have any files in your S3 bucket? Downgrading to 2016.3.0 resolves the issue. The problem is present on both Ubuntu 14.04 and 16.04, and occurs with both explicit AWS credentials and an IAM role. It appears to be caused by PR #33682 (which looks like the only related change in this release).

Setup

Master Config
master_id: <redacted>
max_open_files: 65000
worker_threads: 16
timeout: 30

file_roots:
  base:
    - /srv/salt/base
  development:
    - /srv/salt/dev
  prod:
    - /srv/salt/prod

top_file_merging_strategy: same
default_top: base
hash_type: sha512
file_ignore_regex:
  - '/\.svn($|/)'
  - '/\.git($|/)'
  - '/\.hg($|/)'
  - '/\.npm($|/)'

gitfs_provider: gitpython

fileserver_backend:
  - roots
  - git
  - s3fs

s3.buckets:
  - <redacted>

pillar_roots:
  base:
    - /srv/pillar/base
  development:
    - /srv/pillar/dev
  prod:
    - /srv/pillar/prod

rest_tornado:
    port: <redacted>
    address: 0.0.0.0
    backlog: 128
    debug: False
    disable_ssl: True
    webhook_disable_auth: False
Minion Config:
master: 127.0.0.1
random_master: True
hash_type: sha512

grains:
  role: saltmaster
  deployment: <redacted>
  profiles:
    - salt

master_finger: <redacted>
Machine Type:

The machine is a m4.large (2 cores, 8GB RAM) running in AWS East with an IAM role allowing for full S3 access. It is only running the salt-master and a salt-minion for itself.

Sample State File:
/etc/collectd/collectd.conf:
  file.managed:
    - source: salt://collectd/collectd.tmpl   #This is located in s3://<redacted>/base/collectd/collectd.tmpl
    - template: jinja
S3 Bucket Layout:
$ aws s3 ls s3://<redacted>
                           PRE base/
                           PRE development/
                           PRE prod/
Result:
----------
          ID: /etc/collectd/collectd.conf
    Function: file.managed
      Result: False
     Comment: Source file salt://collectd/collectd.tmpl not found
     Started: 19:23:04.849051
    Duration: 736.108 ms
     Changes:   
----------
Expected (Actual output after downgrading to 2016.3.0):
----------
          ID: /etc/collectd/collectd.conf
    Function: file.managed
      Result: True
     Comment: File /etc/collectd/collectd.conf updated
     Started: 21:48:34.508841
    Duration: 2277.07 ms
     Changes:   
              ----------
              diff: <redacted>
----------

Steps to Reproduce Issue

This happened with probably the most basic use case for an S3 file backend, on a brand new AWS EC2 instance (AMI: ami-13be557e): upload a file to s3://my-bucket/base/, configure the master to use the s3fs file backend, then apply a state file that uses the uploaded file as its source.

I dug into the code a bit and added some logging to figure out what was coming back from S3. First, I added logging in fileserver/s3fs.py to output the s3_meta keys and the resulting meta_response object, and ended up with this:

[salt.utils.lazy  ][DEBUG   ][14199] LazyLoaded s3fs.envs
[salt.fileserver  ][DEBUG   ][14199] Updating s3fs fileserver cache
[salt.loaded.int.fileserver.s3fs][DEBUG   ][14199] Refreshing buckets cache file
[requests.packages.urllib3.connectionpool][INFO    ][14199] Starting new HTTP connection (1): 169.254.169.254
[requests.packages.urllib3.connectionpool][DEBUG   ][14199] "GET /latest/meta-data/iam/security-credentials/ HTTP/1.1" 200 9
[requests.packages.urllib3.connectionpool][INFO    ][14199] Starting new HTTP connection (1): 169.254.169.254
[requests.packages.urllib3.connectionpool][DEBUG   ][14199] "GET /latest/meta-data/iam/security-credentials/PetroSalt HTTP/1.1" 200 890
[requests.packages.urllib3.connectionpool][INFO    ][14199] Starting new HTTP connection (1): 169.254.169.254
[requests.packages.urllib3.connectionpool][DEBUG   ][14199] "GET /latest/dynamic/instance-identity/document HTTP/1.1" 200 429
[salt.utils.s3    ][DEBUG   ][14199] S3 Request: https://petrocloud-releases.s3.amazonaws.com/?
[salt.utils.s3    ][DEBUG   ][14199] S3 Headers::
[salt.utils.s3    ][DEBUG   ][14199]     Authorization: AWS4-HMAC-SHA256 Credential=<redacted>, SignedHeaders=host;x-amz-date;x-amz-security-token, Signature=<redacted>
[requests.packages.urllib3.connectionpool][INFO    ][14199] Starting new HTTPS connection (1): petrocloud-releases.s3.amazonaws.com
[requests.packages.urllib3.connectionpool][DEBUG   ][14199] "GET / HTTP/1.1" 200 None
[salt.utils.s3    ][DEBUG   ][14199] S3 Response Status Code: 200
[salt.loaded.int.fileserver.s3fs][INFO    ][14199] s3_meta key: headers
[salt.loaded.int.fileserver.s3fs][WARNING ][14199] S3 Error! Do you have any files in your S3 bucket? {}
[salt.loaded.int.fileserver.s3fs][INFO    ][14199] Syncing local cache from S3...
[salt.loaded.int.fileserver.s3fs][INFO    ][14199] Sync local cache from S3 completed.

Next I tried logging what was actually contained in the response from utils/s3.py and ended up with this:

[salt.loaded.int.fileserver.s3fs][INFO    ][2614] {'headers': ['Date', 'x-amz-id-2', 'Server', 'Transfer-Encoding', 'x-amz-request-id', 'x-amz-bucket-region', 'Content-Type']}

By comparison, I get this from 2016.3.0 (truncated since it shows every file in the bucket):

[{'Name': '<redacted>'}, {'Prefix': None}, {'Marker': None}, {'MaxKeys': '1000'}, {'IsTruncated': 'false'}, {'LastModified': '<redacted>', 'ETag': '"<redacted>"', 'StorageClass': 'STANDARD', 'Key': 'base/', 'Owner': {'DisplayName': '<redacted>', 'ID': '<redacted>'}, 'Size': '0'}]

Versions Report

Master
Salt Version:
           Salt: 2016.3.1

Dependency Versions:
           cffi: Not Installed
       cherrypy: 3.5.0
       dateutil: 2.5.3
          gitdb: 0.6.4
      gitpython: 1.0.1
          ioflo: Not Installed
         Jinja2: 2.8
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 1.0.3
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.7.11+ (default, Apr 17 2016, 14:00:29)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.2.0
           RAET: Not Installed
          smmap: 0.9.0
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4

System Versions:
           dist: Ubuntu 16.04 xenial
        machine: x86_64
        release: 4.4.0-22-generic
         system: Linux
        version: Ubuntu 16.04 xenial
Minion
Salt Version:
           Salt: 2016.3.1

Dependency Versions:
           cffi: Not Installed
       cherrypy: 3.5.0
       dateutil: 2.5.3
          gitdb: 0.6.4
      gitpython: 1.0.1
          ioflo: Not Installed
         Jinja2: 2.8
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: 1.0.3
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.7.11+ (default, Apr 17 2016, 14:00:29)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.2.0
           RAET: Not Installed
          smmap: 0.9.0
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.1.4

System Versions:
           dist: Ubuntu 16.04 xenial
        machine: x86_64
        release: 4.4.0-22-generic
         system: Linux
        version: Ubuntu 16.04 xenial
@imcecil

imcecil commented Jun 17, 2016

I had the same issue. Replacing salt/utils/aws.py and salt/utils/s3.py with the versions from 2016.3.0 fixed it on a test master.

@Ch3LL
Contributor

Ch3LL commented Jun 17, 2016

@fooka03 I am able to replicate this when simply running salt-run fileserver.file_list

Here is the git bisect:

4a9b23f03fb323a7c9a86017ab26b16c4f9f411b is the first bad commit
commit 4a9b23f03fb323a7c9a86017ab26b16c4f9f411b
Author: Ethan Moore <github@proxyman.com>
Date:   Fri May 27 21:40:31 2016 +0000

    first go at having requests use streaming for get/put requests

:040000 040000 f8f13de5f839b5067d13db882d515a02d610a5a0 90e23b0b77b7c12537d57a1fc3bc40feec78cba7 M      salt

@meggiebot meggiebot added this to the C 7 milestone Jun 17, 2016
@meggiebot meggiebot added ZRELEASED - 2016.3.2 Bug broken, incorrect, or confusing behavior severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around labels Jun 17, 2016
@stanislavb
Contributor

Hi. This broke my salt masters, which use the S3 backend for states and pillars.

Salt master log

2016-06-20 08:35:56,645 [salt.fileserver                                      ][DEBUG   ][28167] Updating s3fs fileserver cache
2016-06-20 08:35:56,646 [salt.loaded.int.fileserver.s3fs                      ][DEBUG   ][28167] Refreshing buckets cache file
2016-06-20 08:35:56,646 [salt.utils.s3                                        ][DEBUG   ][28167] S3 Request: https://salt-states-bucket.s3-eu-west-1.amazonaws.com/?
2016-06-20 08:35:56,646 [salt.utils.s3                                        ][DEBUG   ][28167] S3 Headers::
2016-06-20 08:35:56,646 [salt.utils.s3                                        ][DEBUG   ][28167]     Authorization: AWS4-HMAC-SHA256 Credential=<redacted>/20160620/eu-west-1/s3/aws4_request, SignedHeaders=host;x-amz-date;x-amz-security-token, Signature=<redacted>
2016-06-20 08:35:56,648 [requests.packages.urllib3.connectionpool             ][INFO    ][28167] Starting new HTTPS connection (1): salt-states-bucket.s3-eu-west-1.amazonaws.com
2016-06-20 08:35:56,851 [requests.packages.urllib3.connectionpool             ][DEBUG   ][28167] "GET / HTTP/1.1" 200 None
2016-06-20 08:35:56,852 [salt.utils.s3                                        ][DEBUG   ][28167] S3 Response Status Code: 200
2016-06-20 08:35:56,854 [salt.loaded.int.fileserver.s3fs                      ][WARNING ][28167] S3 Error! Do you have any files in your S3 bucket?
2016-06-20 08:35:56,854 [salt.loaded.int.fileserver.s3fs                      ][INFO    ][28167] Syncing local cache from S3...
2016-06-20 08:35:56,854 [salt.loaded.int.fileserver.s3fs                      ][INFO    ][28167] Sync local cache from S3 completed.

salt --versions-report

Salt Version:
           Salt: 2016.3.1

Dependency Versions:
           cffi: Not Installed
       cherrypy: 3.2.2
       dateutil: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.3
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.6.9 (unknown, Dec 17 2015, 01:08:55)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 14.5.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5

System Versions:
           dist:
        machine: x86_64
        release: 4.4.11-23.53.amzn1.x86_64
         system: Linux
        version: Not Installed

@Ch3LL Ch3LL added severity-high 2nd top severity, seen by most users, causes major problems P2 Priority 2 RIoT Relates to integration with cloud providers, hypervisors, API-based services, etc. labels Jun 20, 2016
@fooka03
Author

fooka03 commented Jun 21, 2016

So tentatively it looks like I may have solved the issue simply by changing the logic around when to use streams in GET requests in salt/utils/s3.py. Replacing this:

elif method == 'GET' and not return_bin:

with this:

elif method == 'GET' and local_file:

Seems to do the trick as far as getting past the issue, and it also doesn't break #33599. However, it needs more cleanup: when salt/fileserver/s3fs.py checks a file against the cache, it spams the logs for each chunk ("skipped download since cached file size equal to and mtime after s3 values"). I'll keep working at it, but I wanted to share my progress for anyone else who might be interested or might have advice on how to tackle this part.
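
For anyone following along, the intent of the change is roughly the sketch below: stream to disk only when a local cache file is the target, so a plain bucket listing still returns a body that can be XML-parsed. This is illustrative only, not the actual salt.utils.s3.query code; the function name and parameters are assumptions.

import requests

def s3_get(requesturl, headers=None, local_file=None, return_bin=False, chunk_size=16384):
    """Stream the response body to disk only when a local cache path is given."""
    if local_file and not return_bin:
        # Stream the body to the cache file in chunks instead of buffering it in memory.
        resp = requests.get(requesturl, headers=headers, stream=True)
        with open(local_file, 'wb') as out:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                if chunk:
                    out.write(chunk)
        return resp.headers
    # Otherwise return the whole body, e.g. a bucket listing that still needs XML parsing.
    resp = requests.get(requesturl, headers=headers)
    return resp.content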

@lomeroe
Contributor

lomeroe commented Jun 22, 2016

@fooka03 @stanislavb @imcecil Huge apologies to all; this is totally my fault. I just discovered today that my code borked this up (I apparently didn't test it properly).

I believe "elif method == 'GET' and local_file and not return_bin" is the correct fix for that line of code; it corrects the issue for me and doesn't produce the log spam @fooka03 describes.

@fooka03
Author

fooka03 commented Jun 22, 2016

@lomeroe I wouldn't be so quick on the draw here, as I got the OOM issue on my salt-master with this code change. The strange thing was that with multiple files in my bucket it seemed to be fine, but when I switched to a test bucket containing only the huge file, it died on startup. There's something else at play here, and I've been trying to dig through the internals to figure it out. I suspect this whole chunk in salt/fileserver/s3fs.py needs some TLC:

else:
    cached_file_stat = os.stat(cached_file_path)
    cached_file_size = cached_file_stat.st_size
    cached_file_mtime = datetime.datetime.fromtimestamp(
        cached_file_stat.st_mtime)

    cached_file_lastmod = datetime.datetime.strptime(
        file_meta['LastModified'], '%Y-%m-%dT%H:%M:%S.%fZ')
    if (cached_file_size == int(file_meta['Size']) and
            cached_file_mtime > cached_file_lastmod):
        log.debug('cached file size equal to metadata size and '
                  'cached file mtime later than metadata last '
                  'modification time.')
        ret = s3.query(
            key=key,
            keyid=keyid,
            kms_keyid=keyid,
            method='HEAD',
            bucket=bucket_name,
            service_url=service_url,
            verify_ssl=verify_ssl,
            location=location,
            path=_quote(path),
            local_file=cached_file_path,
            full_headers=True
        )
        if ret is not None:
            for header_name, header_value in ret['headers'].items():
                name = header_name.strip()
                value = header_value.strip()
                if str(name).lower() == 'last-modified':
                    s3_file_mtime = datetime.datetime.strptime(
                        value, '%a, %d %b %Y %H:%M:%S %Z')
                elif str(name).lower() == 'content-length':
                    s3_file_size = int(value)
            if (cached_file_size == s3_file_size and
                    cached_file_mtime > s3_file_mtime):
                log.info(
                    '{0} - {1} : {2} skipped download since cached file size '
                    'equal to and mtime after s3 values'.format(
                        bucket_name, saltenv, path))
                return

Namely, it's making that call to S3 and downloading the file every time even though it only needs the headers. In my case this happens multiple times, creating a new connection to S3 each time. Later today, if I have some time, I'm going to start abusing the traceback module to try to get to the bottom of it.
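
To illustrate the point, something like the sketch below would get an object's size and last-modified time without pulling the body at all. This is just a sketch with requests against a signed object URL, not salt's s3.query API; the helper name and parameters are assumptions.

import datetime
import requests

def s3_object_metadata(object_url, auth_headers=None):
    """Fetch only the object's size and last-modified time via a HEAD request."""
    resp = requests.head(object_url, headers=auth_headers or {})
    resp.raise_for_status()
    # HEAD returns the same headers as GET but no body, so nothing is downloaded.
    size = int(resp.headers['Content-Length'])
    mtime = datetime.datetime.strptime(
        resp.headers['Last-Modified'], '%a, %d %b %Y %H:%M:%S %Z')
    return size, mtime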

Test Setup

AWS Ubuntu 16.04 t2.micro instance (ami-13be557e) running 2016.3.1 with the hotfix applied
S3 bucket with a 1GB file created using fallocate -l 1G big_file
State file:

/opt/test:
  file.managed:
    - source: salt://big_file

Command: salt '*' state.apply big_file

@lomeroe
Contributor

lomeroe commented Jun 22, 2016

@fooka03 Interesting; the memory footprint seemed okay in my big-file test (though I didn't test it in that same manner)... I'll look deeper as well.

@lomeroe
Contributor

lomeroe commented Jun 22, 2016

@fooka03 here's what I'm seeing on the cache miss/multiple downloads (let me know if you're seeing something different).

The ETag cache misses on the hash check because of a multipart upload (in my scenario, at least). The ETag is apparently calculated differently for multipart uploads, so the hash has a '-' in it and the s3fs module ignores it. Strangely, the S3 console shows the "standard" ETag, but querying the metadata returns the multipart ETag...

The size/date check then fails because the cached file mtime is in local time while the S3 metadata timestamp is in GMT.
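
In other words, the two timestamps need to be compared in the same timezone, e.g. something like the sketch below (illustrative only, not the actual s3fs code):

import datetime
import os

def cache_is_newer(cached_file_path, s3_last_modified):
    """Compare the cache mtime against S3's Last-Modified value in UTC."""
    # utcfromtimestamp keeps the comparison in UTC instead of comparing a
    # local-time mtime against S3's GMT timestamp.
    cached_mtime = datetime.datetime.utcfromtimestamp(os.stat(cached_file_path).st_mtime)
    return cached_mtime > s3_last_modified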

Honestly, I'm not seeing how these weren't an issue before the pyrequests streaming was added to the s3 utils module...

@fooka03
Author

fooka03 commented Jun 22, 2016

Yeah, that seems to be what I'm getting. Here's the log output:

ETAG: d029b1d579e2ed5c5c818db701ef72df-16
CACHE MTIME: 2016-06-22 19:25:50.437793
S3 LAST MOD: 2016-06-21 16:54:39

@fooka03
Author

fooka03 commented Jun 22, 2016

@lomeroe The code, for me at least, makes the proper adjustment for UTC vs. CDT. To reduce the impact of the ETag misses (since they're just going to be a fact of life with multipart uploads), I think we could add the ability to do a HEAD request between s3.py and s3fs.py instead of a GET, which would return only the object's metadata. Additionally, regarding the ETag logic, it would probably be beneficial to allow a supplied hash/algorithm to check against (that mechanism already exists in file.managed); that would give users a way to bypass the excessive S3 calls, though at this point we're probably getting outside the scope of this issue. A rough sketch of that check is below.
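
As an illustration of the supplied-hash idea (a hypothetical helper, not anything that exists in s3fs today; the names and defaults are assumptions):

import hashlib

def cached_file_matches(cached_file_path, source_hash, hash_type='sha256', chunk_size=65536):
    """Skip the S3 round-trip entirely when a user-supplied hash matches the cached file."""
    digest = hashlib.new(hash_type)
    with open(cached_file_path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest() == source_hash.lower()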

So back to the task at hand, here's what I'm getting in the system log when I run the above test:

[   59.820456] Out of memory: Kill process 1108 (salt-master) score 64 or sacrifice child
[   59.824693] Killed process 1108 (salt-master) total-vm:804636kB, anon-rss:66184kB, file-rss:1152kB
[   59.950763] Out of memory: Kill process 1150 (salt-master) score 62 or sacrifice child
[   59.954738] Killed process 1150 (salt-master) total-vm:275412kB, anon-rss:63400kB, file-rss:1592kB
[   60.869935] Out of memory: Kill process 1151 (salt-master) score 62 or sacrifice child
[   60.874159] Killed process 1151 (salt-master) total-vm:275420kB, anon-rss:63400kB, file-rss:1564kB
[   61.816587] Out of memory: Kill process 1149 (salt-master) score 62 or sacrifice child
[   61.820600] Killed process 1149 (salt-master) total-vm:275412kB, anon-rss:63388kB, file-rss:1492kB
[   62.031248] Out of memory: Kill process 1147 (salt-master) score 62 or sacrifice child
[   62.035159] Killed process 1147 (salt-master) total-vm:275404kB, anon-rss:63380kB, file-rss:1608kB

The instance at this point is dead and needs to be restarted.

@lomeroe
Contributor

lomeroe commented Jun 23, 2016

@fooka03 out of curiosity is your system's timezone set to UTC?
Is your log getting spammed about the file download being skipped due to mtime/size being equal?

Edit to add: sorry, I see above that you already noted you're getting that log spam...

@meggiebot meggiebot modified the milestones: C 8, C 7 Jun 23, 2016
@meggiebot meggiebot removed the fixed-pls-verify fix is linked, bug author to confirm fix label Jun 23, 2016
@lomeroe
Contributor

lomeroe commented Jun 24, 2016

@fooka03 if you are getting the log spam about the download being skipped due to size/mtime, then the file was successfully downloaded from S3, and the OOM issue is at least not being caused by utils.s3 downloading the file (the original intent of #33682 seems to be working).

That said, I agree that the utils.s3 update to use pyrequests streaming has definitely exposed other things inside the s3fs module. The function containing the code you referenced above (_get_file_from_s3, which is called from find_file) runs for each chunk returned from the master to the minion when the file is served. For files that were multipart-uploaded, that function compares the locally cached metadata against the cached file and also re-pulls the metadata from S3 and compares again (method="HEAD" in the s3.query call). On a 1GB file, that's 1024 checks with the default file buffer size. If the 1GB file wasn't multipart-uploaded, the md5 of the cached file would be calculated on each chunk and compared to the cached metadata. For small files it's no big deal, but for large files it really takes a toll.

On a smaller system (a t2.small for me), if the s3fs fileserver bucket is very big (mine is ~4GB, with my 1.7GB "big file" and about 1000 total files), IO is consumed by the cache update/checking: all the files get downloaded successfully into the cache, but then the cache checksum/metadata checking takes so long that it can't keep up with much of anything else. I get SaltReqTimeouts even when just trying to run a state. The instance also only has around 70MB of free memory after starting up, so I can see how your micro would easily run out of memory once much of anything started happening.

Using a t2.large, which can push a little more IO/CPU, the master can keep up with the s3fs update() calls on my fileserver bucket:

My 1.7GB test file being served off S3 via the s3fs module (i.e. salt://big_file) takes around one hour to complete the file.managed state. Changing _get_file_from_s3 to only check whether the file exists in the cache (and download it if not) knocks that down to ~15 minutes when the file has already been cached from S3, but I don't know whether that is appropriate or whether something more robust should be happening. A rough sketch of the simplified check is just below.
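
The simplified check I tested is roughly this (the download_from_s3 callable is a placeholder, not the actual s3fs code):

import os

def ensure_cached(cached_file_path, download_from_s3):
    """Only hit S3 when the cached copy is missing; otherwise trust the cache."""
    if os.path.isfile(cached_file_path):
        return cached_file_path
    download_from_s3(cached_file_path)  # placeholder for the existing S3 GET logic
    return cached_file_path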

In comparison, using

/tmp/test:
  file.managed:
    - source: s3://my-fileserver-bucket/big_file
    - source-hash: big_file_hash

to manage the same 1.7GB file averages about 4 minutes in my testing.

A third comparison, using the standard file system based fileserver with:

/tmp/test:
  file.managed:
    - source: salt://big_file

Runs in about 30 seconds...

It also appears to me that during the file.managed state, with the salt:// path and the s3fs backend, the file is served twice by the master. Perhaps that is by design, but it just exacerbates the issue on a really big file like this.

This may warrant a separate issue for s3fs, as I think the initially reported issue (which was caused by the changes to utils.s3.query) is fixed with the above PRs.

@gtmanfred gtmanfred added this to the C 7 milestone Jun 27, 2016
@gtmanfred gtmanfred removed this from the C 8 milestone Jun 27, 2016
@gtmanfred
Contributor

gtmanfred commented Jun 29, 2016

Also appears to be broken on CentOS 7.

[root@lxc ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@lxc ~]# salt --versions-report
Salt Version:
           Salt: 2016.3.1

Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.2
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.21.1
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.7
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.7.5 (default, Nov 20 2015, 02:00:19)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 14.7.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5

System Versions:
           dist: centos 7.2.1511 Core
        machine: x86_64
        release: 3.10.0-327.18.2.el7.x86_64
         system: Linux
        version: CentOS Linux 7.2.1511 Core

However, it does appear to be fixed with the above patches.

@gtmanfred
Contributor

@fooka03 @lomeroe Hey guys,

It looks like the initial error reported in this issue has been resolved in the 2016.3 branch, namely the

[salt.loaded.int.fileserver.s3fs][WARNING ][14199] S3 Error! Do you have any files in your S3 bucket? {}

If so, can we close out this issue and open another one for the OOM issue when streaming large files?

Thanks!
Daniel

@meggiebot meggiebot added fixed-pls-verify fix is linked, bug author to confirm fix and removed ZRELEASED - 2015.8.11 ZRELEASED - 2016.3.2 labels Jun 29, 2016
@fooka03
Author

fooka03 commented Jun 29, 2016

@gtmanfred @lomeroe I'm good with marking this one closed and opening a new issue for the other stuff. I'll work on getting the new issue created tomorrow unless someone beats me to the punch before then.

@gtmanfred
Contributor

Awesome! Thanks!
