
s3-tests failed with CEPH on Raspberry Pi #339

Closed
Ei3rb0mb3r opened this issue Jan 15, 2020 · 1 comment
Hello, I am trying to create a test bucket on a local Ceph Raspberry Pi cluster, and I get the following error message:

OS: Debian Jessie
Ceph: v12.2.12 (Luminous)
s3cmd: 2.0.2

[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host admin and default port 7480


cephuser@admin:~/mycluster $ ceph -s
  cluster:
    id:     745d44c2-86dd-4b2f-9c9c-ab50160ea353
    health: HEALTH_WARN
            too few PGs per OSD (24 < min 30)

  services:
    mon: 1 daemons, quorum admin
    mgr: admin(active)
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active

  data:
    pools:   4 pools, 32 pgs
    objects: 80 objects, 1.09KiB
    usage:   4.01GiB used, 93.6GiB / 97.6GiB avail
    pgs:     32 active+clean

  io:
    client:   5.83KiB/s rd, 0B/s wr, 7op/s rd, 1op/s wr
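As an aside, the HEALTH_WARN figure can be reproduced by hand: each PG is replicated onto several OSDs, so the average number of PG copies per OSD is total PGs × replica size ÷ OSD count. A minimal sketch, assuming the default replicated pool size of 3 (an assumption; check with `ceph osd pool get <pool> size`):

```python
# Rough reconstruction of the "too few PGs per OSD (24 < min 30)" number,
# using the values from the ceph -s output above (4 OSDs, 32 PGs total)
# and an ASSUMED replica size of 3.
def pgs_per_osd(total_pgs, replica_size, num_osds):
    # Each PG is stored on `replica_size` OSDs, so the average
    # count of PG copies landing on a single OSD is:
    return total_pgs * replica_size // num_osds

print(pgs_per_osd(32, 3, 4))  # 24 -- below the default warning threshold of 30
```

This warning is unrelated to the RGW crash, but it explains the HEALTH_WARN state; raising pg_num on the pools would clear it.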

After about a minute, the RGW service (rgw: 1 daemon active) is no longer listed:

cephuser@admin:~/mycluster $ ceph -s
  cluster:
    id:     745d44c2-86dd-4b2f-9c9c-ab50160ea353
    health: HEALTH_WARN
            too few PGs per OSD (24 < min 30)

  services:
    mon: 1 daemons, quorum admin
    mgr: admin(active)
    osd: 4 osds: 4 up, 4 in

  data:
    pools:   4 pools, 32 pgs
    objects: 80 objects, 1.09KiB
    usage:   4.01GiB used, 93.6GiB / 97.6GiB avail
    pgs:     32 active+clean

./s3cmd --debug mb s3://testbucket

Debug output:

DEBUG: Unicodising 'mb' using UTF-8
DEBUG: Unicodising 's3://testbucket' using UTF-8
DEBUG: Command: mb
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: u'PUT\n\n\n\nx-amz-date:Wed, 15 Jan 2020 02:28:25 +0000\n/testbucket/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(testbucket): 192.168.178.50:7480
DEBUG: ConnMan.get(): creating new connection: http://192.168.178.50:7480
DEBUG: non-proxied HTTPConnection(192.168.178.50, 7480)
DEBUG: Response:


DEBUG: Unicodising './s3cmd' using UTF-8
DEBUG: Unicodising '--debug' using UTF-8
DEBUG: Unicodising 'mb' using UTF-8
DEBUG: Unicodising 's3://testbucket' using UTF-8
Invoked as: ./s3cmd --debug mb s3://testbucket
Problem: error: [Errno 111] Connection refused
S3cmd:   2.0.2
python:   2.7.17 (default, Oct 19 2019, 23:36:22)
[GCC 9.2.1 20190909]
environment LANG=en_GB.UTF-8

Traceback (most recent call last):
  File "./s3cmd", line 3092, in <module>
    rc = main()
  File "./s3cmd", line 3001, in main
    rc = cmd_func(args)
  File "./s3cmd", line 237, in cmd_bucket_create
    response = s3.bucket_create(uri.bucket(), cfg.bucket_location)
  File "/home/cephuser/s3cmd-2.0.2/S3/S3.py", line 398, in bucket_create
    response = self.send_request(request)
  File "/home/cephuser/s3cmd-2.0.2/S3/S3.py", line 1258, in send_request
    conn = ConnMan.get(self.get_hostname(resource['bucket']))
  File "/home/cephuser/s3cmd-2.0.2/S3/ConnMan.py", line 253, in get
    conn.c.connect()
  File "/usr/lib/python2.7/httplib.py", line 831, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 575, in create_connection
    raise err
error: [Errno 111] Connection refused
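For what it's worth, [Errno 111] Connection refused means the TCP connection was actively rejected, i.e. nothing was listening on 192.168.178.50:7480 at that moment — consistent with the rgw daemon disappearing from ceph -s above. A small sketch for verifying the endpoint independently of s3cmd (host and port taken from the debug log; adjust to your setup):

```python
import socket

def rgw_is_listening(host, port, timeout=2.0):
    """Return True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ECONNREFUSED, timeouts, and unreachable hosts.
        return False

# Example (endpoint from the s3cmd debug output above):
# rgw_is_listening("192.168.178.50", 7480)
```

If this returns False, the problem is on the gateway side (daemon crashed or bound elsewhere), not in s3cmd.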

Does anyone know what is causing this error?

Ei3rb0mb3r (author) commented Jan 15, 2020

Solution:
On the gateway node, open the Ceph configuration file in the /etc/ceph/ directory and find the RGW client section, similar to this example:

[client.rgw.gateway-node1]
host = gateway-node1
keyring = /var/lib/ceph/radosgw/ceph-rgw.gateway-node1/keyring
log file = /var/log/ceph/ceph-rgw-gateway-node1.log
rgw frontends = civetweb port=192.168.178.50:8080 num_threads=100
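One follow-up worth noting: after binding the civetweb frontend to 192.168.178.50:8080, the client side must point at the same endpoint, or s3cmd will keep hitting the old port 7480. A sketch of the relevant ~/.s3cfg lines (values taken from the logs above; adjust to your setup):

```ini
host_base = 192.168.178.50:8080
host_bucket = 192.168.178.50:8080
use_https = False
```

The gateway also needs a restart for the ceph.conf change to take effect (e.g. systemctl restart ceph-radosgw@rgw.gateway-node1 — the unit name here is an assumption based on the section name above).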
