
Dashboard - NFS - No cluster available #4467

Closed
skunkr opened this issue Sep 25, 2019 · 15 comments

skunkr commented Sep 25, 2019

Bug Report

Hi,
I have installed from stable-4.0 and everything went well, but I just can't create NFS exports from the dashboard UI.
It says "No cluster available", although there is already an export configured with the RGW backend.
The only command I've issued is:
ceph dashboard set-ganesha-clusters-rados-pool-namespace .rgw.root
On the NFS-enabled nodes, after installation I can see "ganesha.conf", "rgw_bucket.conf" and "rgw.conf" created under /etc/ganesha/.
I have uploaded these files as RADOS objects to the ".rgw.root" pool, but the UI still says "-- No cluster available --".
I think I am missing something, but I'm also not sure how to sort this out.
Any thoughts?
Thank you,

Leo

@dsavineau
Contributor

We probably need to investigate the Ganesha configuration in the Ceph Dashboard further.

https://docs.ceph.com/docs/master/mgr/dashboard/#nfs-ganesha-management

As you mentioned, we need to run one extra ceph dashboard command, but also update the Ganesha configuration.

@skunkr
Author

skunkr commented Sep 26, 2019

Thank you Dimitri,
but I think it is also a lack of knowledge on my side.
What would be the exact extra manual steps needed to get the NFS service added to the Dashboard? I have read the documentation, but it seems a bit confusing to me regarding which files, with what content, to place as objects.
Or should this be done automatically by the playbook?

@dsavineau
Contributor

From my point of view, this should be done automatically by the playbook.
That said, we will need to dig a little bit more in this area because we also lack knowledge on this part.

@KingJ

KingJ commented Oct 5, 2019

I was about to log a new bug related to this, however I saw this as a related issue and thought it best to group it.

To build up a list of NFS daemons, the dashboard looks in the configured RADOS pool for conf-* objects. The objects can, and should, be empty - the dashboard is only looking for them by name. If there are no conf-* files in the pool, then the dashboard will report no daemons are available.

Bug 1: The ceph-nfs role does not create new conf-* objects for hosts in the [nfss] group in any RADOS pool. It only creates a ganesha-export-index file in the default CephFS data pool if it doesn't already exist.
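As a manual workaround until the role handles this, the missing conf-* objects can be created by hand. A sketch, assuming the dashboard is configured to use a pool named nfs-ganesha and NFS hosts named node1 and node2 (all three names are hypothetical, substitute your own):

```shell
# Create an empty conf-<nodename> object per NFS host, so the dashboard's
# daemon discovery (which lists conf-* objects by name) finds them.
for node in node1 node2; do
    rados -p nfs-ganesha put "conf-${node}" /dev/null
done

# Verify the objects exist; the dashboard only matches on the name,
# so empty objects are sufficient at this stage.
rados -p nfs-ganesha ls | grep '^conf-'
```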

When a daemon is available and an export is created via the dashboard, two things then happen.

  1. A new export-<id> object is created in the configured RADOS pool, containing the NFS Ganesha export configuration. For example:

```
$ rados -p nfs-ganesha get export-1 -
EXPORT {
    FSAL {
        secret_access_key = "REMOVED";
        user_id = "media";
        name = "CEPH";
        filesystem = "cephfs";
    }

    pseudo = "/media";
    squash = "no_root_squash";
    access_type = "RW";
    path = "/media";
    export_id = 1;
    transports = "UDP", "TCP";
    protocols = 4;
}
```

  2. For each daemon selected for the export on the dashboard, the conf-<nodename> object has the RADOS URL of the new export appended. For example:

```
$ rados -p nfs-ganesha get conf-allmight -
%url "rados://nfs-ganesha/export-1"
```

Each NFS Ganesha server should read its conf-<nodename> object, then read the export objects referenced in that file and apply them.
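For reference, the Ganesha side of this chain is the RADOS_URLS mechanism: ganesha.conf loads a %url pointing at the per-host conf object, which in turn contains %url lines for each export. A minimal sketch, assuming a pool named nfs-ganesha, a host named allmight, and the client.admin CephX user (all of these names are assumptions):

```
# /etc/ganesha/ganesha.conf (fragment)
RADOS_URLS {
    ceph_conf = "/etc/ceph/ceph.conf";
    userid = "admin";
}

# Pull this host's conf object; it holds one %url line per export.
%url "rados://nfs-ganesha/conf-allmight"
```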

Bug 2: The Ganesha configuration file template explicitly sets the object URL to %url rados://{{ cephfs_data_pool.name }}/{{ ceph_nfs_rados_export_index }} - i.e. rados://cephfs_data/ganesha-export-index by default. Although this object exists, having been created previously by the ceph-nfs role, it is empty. Unless the user chooses to manually populate it with export configurations, or RADOS URLs to other objects containing export configurations, then the Ganesha server will export... nothing! The URL needs to be set to %url rados://{{ cephfs_data_pool.name }}/export-{{ ansible_hostname }}

Bug 3: The Ceph dashboard uses a specific RADOS pool for NFS configurations. Currently, the ceph-nfs role hardcodes this to be the cephfs_data_pool. This should be configurable.

Bug 4: The ceph-dashboard role does not configure the GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE setting, and so even if the correct RADOS objects have been created, the dashboard does not know where to find them.
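Until the role configures this, the setting can be applied by hand with the command already mentioned above. A sketch, where the pool name nfs-ganesha and the namespace mynamespace are hypothetical placeholders:

```shell
# Point the dashboard at the pool holding the conf-*/export-* objects.
ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha

# Or, if the objects live under a RADOS namespace within the pool:
ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha/mynamespace
```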

@dsavineau I'd be willing to have a go at fixing this if that's OK? I'll try and get a PR in over the weekend.

@dsavineau
Contributor

@KingJ You don't need to ask for permission. Go for it! 👍

@stale

stale bot commented Nov 7, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Nov 7, 2019
@dpaclt

dpaclt commented Nov 12, 2019

Hi Team

We face the same issue. The documentation is not clear on enabling NFS in Ceph.

@stale stale bot removed the wontfix label Nov 12, 2019
KingJ added a commit to KingJ/ceph-ansible that referenced this issue Nov 30, 2019
…Ganesha at it (ceph#4467)

Signed-off-by: KingJ <kj@kingj.net>
@eugenkoenig

Is there a quick fix on the command line, or something else available, to create an NFS cluster? Currently it's not usable; I just wanted to give Ceph NFS a try, but I'm facing the same issue...

@JohnnyElvis

Hi Guys, I'm having the same issues with "15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)" ... some advice on a workaround would be great! Otherwise I might need to drop Ceph from our candidates list :(

@darkstar

It would be cool if someone could take another look at this. I'm also looking to export CephFS via Ganesha, and while I got the services up and running, the dashboard still shows no NFS page.

@arkanmgerges

Hi, I'm also facing the same issue.
[Screenshot 2020-09-16 at 15:21:59]

@arkanmgerges

arkanmgerges commented Sep 16, 2020

I've solved this issue by reading http://knowledgebase.45drives.com/kb/kb450159-cephfs-nfsganesha/
I applied:

```
touch conf-nfs1 ; rados -p cephfs_data -N ganesha-export-index put conf-nfs1 conf-nfs1
ceph dashboard set-ganesha-clusters-rados-pool-namespace data/ganesha-export-index
ceph mgr module disable dashboard ; ceph mgr module enable dashboard
```

Then create the NFS export in the dashboard:
[Screenshot 2020-09-16 at 19:45:03]

Then test it on my Linux box where I have Ceph installed:

```
root@controller1:~# ceph mon stat
e1: 1 mons at {controller1=[v2:192.168.50.10:3300/0,v1:192.168.50.10:6789/0]}, election epoch 121, leader 0 controller1, quorum 0 controller1
```

Copy the admin secret key (you can use another client key that has access):

```
root@controller1:~# ceph auth get-key client.admin > /etc/ceph/admin.secret
root@controller1:~# chmod 600 /etc/ceph/admin.secret
root@controller1:~# mkdir ceph_nfs_test
root@controller1:~# mount -t ceph 192.168.50.10:/exports ceph_nfs_test/ -o name=admin,secretfile=/etc/ceph/admin.secret
```

Then I can see it mounted:

```
root@controller1:~# mount -l | grep ceph
192.168.50.10:/exports on /root/ceph_nfs_test type ceph (rw,relatime,name=admin,secret=<hidden>,acl,wsize=33554432)
root@controller1:~# touch ceph_nfs_test/hello_world
```

Create a bucket from the dashboard, "Object Gateway" --> Buckets.
Give it a name and choose the owner to be 'cephnfs'.

Then from your node (where nfs-ganesha is running):

```
root@controller1:~# mount -t nfs localhost:/cephobject aaa
root@controller1:~# mount -l | grep aaa
localhost:/cephobject on /root/aaa type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,clientaddr=::1,local_lock=none,addr=::1)
root@controller1:~# ls aaa
coral.identity
```

Above, my coral.identity is a bucket, and cephobject can be found in /etc/ganesha/ganesha.conf under EXPORT:

```
...
Pseudo = /cephobject;
...
```

I tested the mount from an external server as well (from AWS) and it worked. My local server is at home.

@renich

renich commented Aug 4, 2021

Ping.

Another interested one.

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@github-actions

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

10 participants