
Error: "Disk 'rbd.esx' is not defined to the configuration" on "info" and "delete" commands #87

Closed
Vascko opened this issue May 24, 2019 · 2 comments · Fixed by #89


Vascko commented May 24, 2019

Hey guys,

thanks a ton for all your work on this fantastic project. I love almost everything about how the iSCSI GW and its API are designed.
The only issue I could find that somewhat relates to my problem is in the old ceph-iscsi-cli project, under ceph/ceph-iscsi-cli#108.

OS/Kernel

[root@ceph-osd01 ~]# uname -a
Linux ceph-osd01 5.0.16-300.fc30.x86_64 #1 SMP Tue May 14 19:33:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

TCMU Runner

[root@ceph-osd01 ~]# tcmu-runner -V
tcmu-runner 1.4.0

NOTE: I was actually pulling my hair out a bit over this one because I didn't realize that the tcmu-runner version that comes with Fedora 30 is still v1.1.3, which was first supplied with Fedora 26. Is that because of licensing changes for tcmu-runner?
So I compiled from master to benefit from commits targeting performance and ESXi fixes, which should make tcmu-runner v1.4.1+, but I used the ./make_runnerrpms.sh --without glfs --without qcow --without zbc --without fbo script, which calls git describe --tags, and that only returns v1.4.0. Not sure why it doesn't return the v1.4.1 tag.
The first time I ran into this I had compiled without any exclusion parameters, with the same issue.

RTS Lib

[root@ceph-osd01 ~]# pip3 list | grep rts
rtslib-fb       2.1.69

Ceph iSCSI

[root@ceph-osd01 ~]# pip3 list | grep ceph-iscsi
ceph-iscsi      3.0

ceph-iscsi is actually installed from recent master to benefit from post-3.0 fixes.

Target CLI (this is not used in ceph-iscsi 3.0+ anymore, right? Can I drop it from the requirements?)

[root@ceph-osd01 ~]# targetcli -v
/usr/bin/targetcli version 2.1.fb49

After creating an image in gwcli under /disks, issuing commands that require an image ID, like info <image_id> or delete <image_id>, errors out with Disk 'rbd.esx' is not defined to the configuration and disk name provided does not exist.

Here is the output from gwcli

/> ls
o- / .................................................................... [...]
  o- cluster .................................................... [Clusters: 1]
  | o- ceph ....................................................... [HEALTH_OK]
  |   o- pools ..................................................... [Pools: 2]
  |   | o- .rgw.root ...... [(x3), Commit: 0.00Y/3710570752K (0%), Used: 0.00Y]
  |   | o- rbd ................ [(x3), Commit: 1G/3710570752K (0%), Used: 768K]
  |   o- topology ........................................... [OSDs: 3,MONs: 3]
  o- disks ..................................................... [1G, Disks: 1]
  | o- rbd ......................................................... [rbd (1G)]
  |   o- esx ................................................... [rbd/esx (1G)]
  o- iscsi-targets .......................... [DiscoveryAuth: None, Targets: 0]
/disks/rbd/esx> info
Image                 .. esx
Ceph Cluster          .. ceph
Pool                  .. rbd
Wwn                   .. 225c7825-0b5d-4454-9a59-fe5043f150a3
Size H                .. 1G
Feature List          .. RBD_FEATURE_LAYERING
                         RBD_FEATURE_EXCLUSIVE_LOCK
                         RBD_FEATURE_OBJECT_MAP
                         RBD_FEATURE_FAST_DIFF
                         RBD_FEATURE_DEEP_FLATTEN
Snapshots             ..
Owner                 ..
Backstore             .. user:rbd
Backstore Object Name .. rbd.esx
Control Values
- hw_max_sectors .. 1024
- max_data_area_mb .. 8
- osd_op_timeout .. 30
- qfull_timeout .. 5
/disks> info rbd.esx
CMD: /disks/ info rbd.esx
disk name provided does not exist
/disks> delete rbd.esx
Disk 'rbd.esx' is not defined to the configuration

Here is the output from rbd info esx

[root@ceph-osd01 ~]# rbd info esx
rbd image 'esx':
	size 1 GiB in 1024 objects
	order 20 (1 MiB objects)
	snapshot_count: 0
	id: 25a1abe28e9f6
	block_name_prefix: rbd_data.25a1abe28e9f6
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features:
	flags:
	create_timestamp: Thu May 23 12:23:51 2019
	access_timestamp: Thu May 23 12:23:51 2019
	modify_timestamp: Thu May 23 12:23:51 2019

I tried both the format suggested by the gwcli help text for the image ID (rbd.esx) and the ID displayed by rbd info esx, both of which result in the same errors.

I know that I haven't defined any gateways or targets yet. I scrapped my old config in order to start clean and see whether the issue still happens when I try these operations without any other config items, like gateways or targets, configured. The behavior is the same.

Thanks to anyone who takes a stab at this!


Vascko commented May 24, 2019

Update: this looks like it could be a documentation/help issue. It worked once I switched from rbd.esx to rbd/esx, but the help text still lists . as the separator to use:

/disks> help delete

SYNTAX
======
delete image_id


DESCRIPTION
===========

Delete a given rbd image from the configuration and ceph. This is a
destructive action that could lead to data loss, so please ensure
the rbd image name is correct!

> delete <disk_name>
e.g.
> delete rbd.disk_1

"disk_name" refers to the name of the disk as shown in the UI, for
example rbd.disk_1.

Also note that the delete process is a synchronous task, so the larger
the rbd image is, the longer the delete will take to run.

I discovered this by accident when adding an image to a client:

/iscsi-target...xi01-4316bcc8> disk add rbd.esx
CMD: ../hosts/<client_iqn> disk action=add disk=rbd.esx
Invalid format. Use pool_name/disk_name

The help text for disk operations on clients lists the correct format:

/iscsi-target...xi01-4316bcc8> help disk

SYNTAX
======
disk [action] [disk] [size]


DEFAULT VALUES
==============
action=add


DESCRIPTION
===========

Disks can be added or removed from the client one at a time using
the 'disk' sub-command. Note that if the disk does not currently exist
in the configuration, the cli will attempt to create it for you.

e.g.
disk add <pool_name/image_name> <size>
disk remove <pool_name/image_name>

Adding a disk will result in the disk occupying the client's next
available lun id. Once allocated removing a LUN will not change the
LUN id associations for the client.

Note that if the client is a member of a host group, disk management
*must* be performed at the group level. Attempting to add/remove disks
at the client level will fail.

I'll leave this open as a tracker for a documentation update, but feel free to close as you see fit.

ricardoasmarques added a commit to ricardoasmarques/ceph-iscsi that referenced this issue May 24, 2019
We are now using the format <pool>/<image>
instead of <pool>.<image>

Fixes: ceph#87

Signed-off-by: Ricardo Marques <rimarques@suse.com>
@ricardoasmarques
Contributor

@Vascko Yes, we are now using a / separator instead of . to support managing images that contain . in the name.

I've created a PR to fix the documentation accordingly: #89
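To illustrate the rationale: a minimal sketch (not the actual ceph-iscsi parser, and the function name is hypothetical) of why / is an unambiguous separator while . is not, once image names themselves may contain dots:

```python
def parse_disk_id(disk_id: str) -> tuple[str, str]:
    """Split 'pool/image' into (pool, image); the image part may contain dots."""
    if "/" not in disk_id:
        # Mirrors the gwcli error shown above for the old-style "pool.image" input.
        raise ValueError("Invalid format. Use pool_name/disk_name")
    # maxsplit=1 keeps any further '/' (or '.') inside the image name intact.
    pool, image = disk_id.split("/", 1)
    return pool, image

print(parse_disk_id("rbd/esx"))       # ('rbd', 'esx')
print(parse_disk_id("rbd/esx.prod"))  # ('rbd', 'esx.prod')
# With a '.' separator, "rbd.esx.prod".split(".") gives ['rbd', 'esx', 'prod'] --
# there is no way to tell where the pool name ends and the image name begins.
```

Since / is not a legal character in pool names, splitting on the first / always recovers the pool unambiguously, which a . separator cannot guarantee.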
