
Can't upload to S3 bucket #1631

Closed
demshin opened this issue Nov 20, 2020 · 16 comments

@demshin
Contributor

demshin commented Nov 20, 2020

Hello!

Describe the bug
I'm running SeaweedFS with S3 (3 masters, 3 volume servers, 3 filers on the same servers).
It worked well at first, but some time later I ran into a problem.
I created an S3 bucket with s3cmd:
s3cmd mb s3://test
It was created successfully.
Then I tried to upload a file:
s3cmd put file s3://test
I got an error:
WARNING: Upload failed: /file (500 (InternalError): We encountered an internal error, please try again.)
But I can still upload files to an older bucket.

System Setup

  • List the command line to start "weed master", "weed volume", "weed filer", "weed s3", "weed mount".
    /opt/seaweedfs/weed master -mdir=/data/seaweedfs/master -peers=10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -volumeSizeLimitMB 1024
    /opt/seaweedfs/weed volume -mserver=10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -dir=/data/seaweedfs/volume -dataCenter dc1 -rack rack1 -ip=10.214.3.16 -max=0
    /opt/seaweedfs/weed filer -master 10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -s3 -s3.config /etc/seaweedfs/s3.config.json -s3.domainName example.com -s3.port 80
  • OS version
    CentOS Linux release 7.8.2003 (Core)
  • output of weed version
    version 30GB 2.11 98827d6 linux amd64
  • if using filer, show the content of filer.toml
Here is the relevant part of filer.toml:
[postgres] # or cockroachdb
# CREATE TABLE IF NOT EXISTS filemeta (
#   dirhash     BIGINT,
#   name        VARCHAR(65535),
#   directory   VARCHAR(65535),
#   meta        bytea,
#   PRIMARY KEY (dirhash, name)
# );
enabled = "True"
hostname = "10.214.3.19"
port = 5432
username = "seaweedfs_user"
password = "SECRET_PASSWORD"
database = "seaweedfs_db"              # create or use an existing database
sslmode = "disable"
connection_max_idle = 100
connection_max_open = 100

Expected behavior
I can upload files to S3.

Additional context
I also have the same problem with other S3 tools (aws, s3 sync).
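
For example, the equivalent aws cli invocation (a hypothetical sketch, assuming the same example.com endpoint) would be:
aws --endpoint-url https://example.com s3 cp file s3://test/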

@demshin
Contributor Author

demshin commented Nov 20, 2020

Here is the debug log of s3cmd put file s3://test -d:

DEBUG: s3cmd version 2.1.0
DEBUG: ConfigParser: Reading file '/home/aleksandr.demshin/.s3cfg'
DEBUG: ConfigParser: access_key->96...17_chars...B
DEBUG: ConfigParser: access_token->
DEBUG: ConfigParser: add_encoding_exts->
DEBUG: ConfigParser: add_headers->
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: ca_certs_file->
DEBUG: ConfigParser: cache_file->
DEBUG: ConfigParser: check_ssl_certificate->True
DEBUG: ConfigParser: check_ssl_hostname->True
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: connection_pooling->True
DEBUG: ConfigParser: content_disposition->
DEBUG: ConfigParser: content_type->
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delay_updates->False
DEBUG: ConfigParser: delete_after->False
DEBUG: ConfigParser: delete_after_fetch->False
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: expiry_date->
DEBUG: ConfigParser: expiry_days->
DEBUG: ConfigParser: expiry_prefix->
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->example.com:443
DEBUG: ConfigParser: host_bucket->example.com:443
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: invalidate_default_index_on_cf->False
DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: kms_key->
DEBUG: ConfigParser: limit->-1
DEBUG: ConfigParser: limitrate->0
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: long_listing->False
DEBUG: ConfigParser: max_delete->-1
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: multipart_max_chunks->10000
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->0
DEBUG: ConfigParser: public_url_use_https->False
DEBUG: ConfigParser: put_continue->False
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->65536
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: requester_pays->False
DEBUG: ConfigParser: restore_days->1
DEBUG: ConfigParser: restore_priority->Standard
DEBUG: ConfigParser: secret_key->DE...37_chars...U
DEBUG: ConfigParser: send_chunk->65536
DEBUG: ConfigParser: server_side_encryption->False
DEBUG: ConfigParser: signature_v2->False
DEBUG: ConfigParser: signurl_use_https->False
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: stats->False
DEBUG: ConfigParser: stop_on_error->False
DEBUG: ConfigParser: storage_class->
DEBUG: ConfigParser: throttle_max->100
DEBUG: ConfigParser: upload_id->
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_http_expect->False
DEBUG: ConfigParser: use_https->True
DEBUG: ConfigParser: use_mime_magic->True
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config cache_file ->
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'put' using UTF-8
DEBUG: Unicodising '.s3cfg' using UTF-8
DEBUG: Unicodising 's3://temp' using UTF-8
DEBUG: Command: put
INFO: No cache file found, creating it.
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
INFO: Compiling list of local files...
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: Unicodising '.s3cfg' using UTF-8
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: Unicodising '' using UTF-8
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: Unicodising '.s3cfg' using UTF-8
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: Applying --exclude/--include
DEBUG: CHECK: .s3cfg
DEBUG: PASS: u'.s3cfg'
INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time...
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: doing file I/O to read md5 of .s3cfg
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
INFO: Summary: 1 local files to upload
DEBUG: String 'aleksandr.demshin' encoded to 'aleksandr.demshin'
WARNING: .s3cfg: Owner groupname not known. Storing GID=1806750630 instead.
DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': u'atime:1605870513/ctime:1605772733/gid:1806750630/md5:65bf41f4a96df36bbbf53467cc704482/mode:33152/mtime:1605772733/uid:1806750630/uname:aleksandr.demshin'}
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: CreateRequest: resource[uri]=/.s3cfg
DEBUG: ===== SEND Inner request to determine the bucket region =====
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v4
DEBUG: get_hostname(temp): example.com
DEBUG: canonical_headers = host:example.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20201120T152440Z

DEBUG: Canonical Request:
GET
/temp/
location=
host:example.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20201120T152440Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
----------------------
DEBUG: signature-v4 headers: {'x-amz-content-sha256': u'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': u'AWS4-HMAC-SHA256 Credential=96RIVH4GG8LVQ6Y2E5NB/20201120/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=b868083a29260c862c0daacd678d4bb087f436fa874359ebf6d1ddc4f90a3fdb', 'x-amz-date': '20201120T152440Z'}
DEBUG: Processing request, please wait...
DEBUG: get_hostname(temp): example.com
DEBUG: ConnMan.get(): creating new connection: https://example.com
DEBUG: Using ca_certs_file None
DEBUG: httplib.HTTPSConnection() has both context and check_hostname
DEBUG: non-proxied HTTPSConnection(example.com, None)
DEBUG: format_uri(): /temp/?location
DEBUG: Sending request method_string='GET', uri=u'/temp/?location', headers={'x-amz-content-sha256': u'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': u'AWS4-HMAC-SHA256 Credential=96RIVH4GG8LVQ6Y2E5NB/20201120/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=b868083a29260c862c0daacd678d4bb087f436fa874359ebf6d1ddc4f90a3fdb', 'x-amz-date': '20201120T152440Z'}, body=(0 bytes)
DEBUG: ConnMan.put(): connection put back to pool (https://example.com#1)
DEBUG: Response:
{'data': '<?xml version="1.0" encoding="UTF-8"?>\n<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>temp</Name><Prefix></Prefix><Marker></Marker><MaxKeys>10000</MaxKeys><IsTruncated>false</IsTruncated></ListBucketResult>',
 'headers': {'accept-ranges': 'bytes',
             'connection': 'keep-alive',
             'content-length': '231',
             'content-type': 'application/xml',
             'date': 'Fri, 20 Nov 2020 15:24:40 GMT',
             'server': 'nginx',
             'x-amz-request-id': '1605885880205377331'},
 'reason': 'OK',
 'status': 200}
DEBUG: ===== SUCCESS Inner request to determine the bucket region ('us-east-1') =====
upload: '.s3cfg' -> 's3://temp/.s3cfg'  [1 of 1]
DEBUG: DeUnicodising u'.s3cfg' using UTF-8
DEBUG: Using signature v4
DEBUG: get_hostname(temp): example.com
DEBUG: canonical_headers = content-length:2145
content-type:text/plain
host:example.com
x-amz-content-sha256:e05dceee0eb16f6a3678f13402a73bd6699e3f3a05e1274b853738f6285f0295
x-amz-date:20201120T152440Z
x-amz-meta-s3cmd-attrs:atime:1605870513/ctime:1605772733/gid:1806750630/md5:65bf41f4a96df36bbbf53467cc704482/mode:33152/mtime:1605772733/uid:1806750630/uname:aleksandr.demshin
x-amz-storage-class:STANDARD

DEBUG: Canonical Request:
PUT
/temp/.s3cfg

content-length:2145
content-type:text/plain
host:example.com
x-amz-content-sha256:e05dceee0eb16f6a3678f13402a73bd6699e3f3a05e1274b853738f6285f0295
x-amz-date:20201120T152440Z
x-amz-meta-s3cmd-attrs:atime:1605870513/ctime:1605772733/gid:1806750630/md5:65bf41f4a96df36bbbf53467cc704482/mode:33152/mtime:1605772733/uid:1806750630/uname:aleksandr.demshin
x-amz-storage-class:STANDARD

content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class
e05dceee0eb16f6a3678f13402a73bd6699e3f3a05e1274b853738f6285f0295
----------------------
DEBUG: signature-v4 headers: {'x-amz-content-sha256': u'e05dceee0eb16f6a3678f13402a73bd6699e3f3a05e1274b853738f6285f0295', 'content-length': '2145', 'x-amz-storage-class': 'STANDARD', 'x-amz-meta-s3cmd-attrs': u'atime:1605870513/ctime:1605772733/gid:1806750630/md5:65bf41f4a96df36bbbf53467cc704482/mode:33152/mtime:1605772733/uid:1806750630/uname:aleksandr.demshin', 'x-amz-date': '20201120T152440Z', 'content-type': 'text/plain', 'Authorization': u'AWS4-HMAC-SHA256 Credential=96RIVH4GG8LVQ6Y2E5NB/20201120/us-east-1/s3/aws4_request,SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=94bfdd418eb5feaa2b4495cd33f5d926c8cf86d352dbe98adbc103d0d897d732'}
DEBUG: get_hostname(temp): example.com
DEBUG: ConnMan.get(): re-using connection: https://example.com#1
DEBUG: format_uri(): /temp/.s3cfg
 2145 of 2145   100% in    0s     2.44 MB/sDEBUG: ConnMan.put(): connection put back to pool (https://example.com#2)
DEBUG: Response:
{'data': '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>InternalError</Code><Message>We encountered an internal error, please try again.</Message><Resource>/temp/.s3cfg</Resource><RequestId>1605885880213233340</RequestId></Error>',
 'headers': {'accept-ranges': 'bytes',
             'connection': 'keep-alive',
             'content-length': '225',
             'content-type': 'application/xml',
             'date': 'Fri, 20 Nov 2020 15:24:40 GMT',
             'server': 'nginx',
             'x-amz-request-id': '1605885880213303096'},
 'reason': 'Internal Server Error',
 'size': 2145,
 'status': 500}
 2145 of 2145   100% in    0s   452.03 KB/s  done
DEBUG: S3Error: 500 (Internal Server Error)
DEBUG: HttpHeader: content-length: 225
DEBUG: HttpHeader: accept-ranges: bytes
DEBUG: HttpHeader: server: nginx
DEBUG: HttpHeader: connection: keep-alive
DEBUG: HttpHeader: x-amz-request-id: 1605885880213303096
DEBUG: HttpHeader: etag:
DEBUG: HttpHeader: date: Fri, 20 Nov 2020 15:24:40 GMT
DEBUG: HttpHeader: content-type: application/xml
DEBUG: ErrorXML: Code: 'InternalError'
DEBUG: ErrorXML: Message: 'We encountered an internal error, please try again.'
DEBUG: ErrorXML: Resource: '/temp/.s3cfg'
DEBUG: ErrorXML: RequestId: '1605885880213233340'
WARNING: Upload failed: /.s3cfg (500 (InternalError): We encountered an internal error, please try again.)
WARNING: Waiting 3 sec...
^CSee ya!

@kmlebedev
Contributor

Please show the weed logs.

@demshin
Contributor Author

demshin commented Nov 20, 2020

$ sudo tail /data/seaweedfs/master/log
      52
	raft:join"C{"name":"10.214.3.16:9333","connectionString":"10.214.3.16:19333"}
       c
raft:nop

@kmlebedev
Contributor

We need the weed filer and weed volume logs from when the file is uploaded.

@demshin
Contributor Author

demshin commented Nov 20, 2020

Sorry, where are the logs? How can I enable them?

@kmlebedev
Contributor

Sorry, where are the logs? How can I enable them?

Use the -logtostderr=true parameter.
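
For example, applied to the filer command from the setup above (a sketch; this assumes the glog logging flags are passed as global options before the subcommand):
/opt/seaweedfs/weed -logtostderr=true filer -master 10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -s3 -s3.config /etc/seaweedfs/s3.config.json -s3.domainName example.com -s3.port 80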

@chrislusf
Collaborator

You are not configuring the endpoint and are talking to AWS directly.

@demshin
Contributor Author

demshin commented Nov 20, 2020

You are not configuring the endpoint and are talking to AWS directly.

No, I configured the endpoint. s3cmd is working correctly with another bucket (created earlier).
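
For reference, the endpoint settings visible in the debug output above (from ~/.s3cfg):
host_base = example.com:443
host_bucket = example.com:443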

@chrislusf
Collaborator

How did you configure the endpoint? I saw 'server': 'nginx' in the headers. It seems there is some other configuration in play.

@demshin
Contributor Author

demshin commented Nov 20, 2020

Use the -logtostderr=true parameter.

Thanks! I installed SeaweedFS as a systemd service and used journalctl -u seaweedfs.filer.service -n -f.
I got these errors:

Nov 20 18:49:37 s2375.j weed[31818]: E1120 18:49:37 31818 filer_server_handlers_write.go:42] failing to assign a file id: rpc error: code = Unknown desc = No free volumes left!
Nov 20 18:49:37 s2375.j weed[31818]: I1120 18:49:37 31818 common.go:53] response method:PUT URL:/buckets/dev-passport-video-recordings/02342a46-7435-b698-2437-c778db34ef59.mp4 with httpStatus:500 and JSON:{"error":"rpc error: code = Unknown desc = No free volumes left!"}
Nov 20 18:49:37 s2375.j weed[31818]: E1120 18:49:37 31818 s3api_object_handlers.go:336] upload to filer error: rpc error: code = Unknown desc = No free volumes left!

What does this mean? I have a lot of free disk space.

@kmlebedev
Contributor

kmlebedev commented Nov 20, 2020

Try setting -max=8, and show the output of:
echo volume.list | weed shell
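
For example, applied to the volume command from the setup above (a sketch; only -max is changed):
/opt/seaweedfs/weed volume -mserver=10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -dir=/data/seaweedfs/volume -dataCenter dc1 -rack rack1 -ip=10.214.3.16 -max=8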

@chrislusf
Collaborator

It was already set to -max=0.

How much is "a lot of free disk space"?

@demshin
Contributor Author

demshin commented Nov 20, 2020

How did you configure the endpoint? I saw 'server': 'nginx' in the headers. It seems there is some other configuration in play.

I have Nginx as a proxy and load balancer. It proxies requests to one of the three filers (with S3).
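
The nginx config itself was not shared; a minimal sketch of such a proxy, assuming the three filer S3 endpoints from the setup above listen on port 80:

upstream seaweedfs_s3 {
    server 10.214.3.19:80;
    server 10.214.3.16:80;
    server 10.214.3.17:80;
}

server {
    listen 443 ssl;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key omitted

    location / {
        proxy_pass http://seaweedfs_s3;
        proxy_set_header Host $host;
        client_max_body_size 0;  # do not cap upload sizes at the proxy
    }
}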

@demshin
Contributor Author

demshin commented Nov 20, 2020

It was already set to -max=0.

How much is "a lot of free disk space"?

/dev/vdb                           50G  1.5G   46G   4% /data
/dev/vdb                          9.8G  213M  9.0G   3% /data
/dev/vdb                          9.8G   69M  9.2G   1% /data

@chrislusf
Collaborator

Each volume is configured to be 1GB (-volumeSizeLimitMB 1024).
Each bucket will create 7 volumes by default.

The folder /data/seaweedfs/volume seems to have enough volumes that their count multiplied by 1GB is close to the disk limit.

You can reduce this to -volumeSizeLimitMB 512.
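
As a rough calculation (assuming -max=0 gives each volume server roughly free-space / volume-size slots): 9.8 GB / 1 GB ≈ 9 volume slots per server; the first bucket's 7 volumes leave only ~2 slots, and a new bucket needs 7 more, hence "No free volumes left!" even though df shows the disks nearly empty. A sketch of the master command from this issue with the reduced limit:
/opt/seaweedfs/weed master -mdir=/data/seaweedfs/master -peers=10.214.3.19:9333,10.214.3.16:9333,10.214.3.17:9333 -volumeSizeLimitMB 512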

If you are using the git master branch, to be released in 2.12, there is a more flexible configuration:

If you have a lot of buckets to add, you can configure per-bucket storage this way in weed shell:

> fs.configure -locationPrefix=/buckets/ -volumeGrowthCount=1 -apply

This will add 1 physical volume when existing volumes are full. If using replication, you will need to add more volumes.
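
For example, if each volume has one replica, every growth step must create two copies, so (a sketch reusing the options shown above) you might grow two volumes at a time:

> fs.configure -locationPrefix=/buckets/ -volumeGrowthCount=2 -apply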

See https://github.com/chrislusf/seaweedfs/wiki/Path-Specific-Configuration

@demshin
Contributor Author

demshin commented Nov 20, 2020

It worked for me! Thanks, Chris :)
