
nvme not showing up #188

Closed
jehigh03 opened this issue Jun 25, 2022 · 18 comments
Labels: investigating (Investigating/need more info)

jehigh03 commented Jun 25, 2022

I'm creating an unRAID server for a friend and just installed the Disk Location plugin. I noticed that the newest version of Disk Location does not show NVMe drives. When I do a "force scan all", it detects the NVMe drives, but I cannot assign them to any tray.

How can I assign an NVMe drive to a tray?

Thank you!

olehj (Owner) commented Mar 25, 2023

It should work the same as assigning any other drive. Hard to say what went wrong there; I have 6 NVMe drives myself, all successfully assigned.

Give me the output of this:
"lsscsi -u -g"

Also check whether the drive is recognized by this command. I don't need its output, just verify that the drive is actually there:
"lsscsi"

olehj added the "investigating" label on Mar 25, 2023
marcoyangyf commented:

First, I really appreciate your work. It's very handy!

I have the same issue here: the NVMe drives aren't recognized by your plugin. I have the output here:
[screenshot]

I am currently running UNRAID 6.12.0-RC2 with 2 identical Gloway Basic 256G NVMe SSDs as cache.

BTW, I want to mention that before UNRAID 6.12.0-RC2, the 2 NVMe drives could not be recognized by UNRAID at the same time, perhaps because of globally duplicate IDs. I posted this problem on the UNRAID forums, and support told me to update to 6.12.0-RC2, which solved the recognition problem in UNRAID. I am not a professional, but do you think Disk Location may have the same globally duplicate ID issue here?

The thread I posted on the UNRAID forum: https://forums.unraid.net/topic/137442-solvednew-nvme-not-visible-globally-duplicate-ids/#comment-1248613

marcoyangyf commented:

Plus, my "force scan" also displays the NVMe drives correctly:
[screenshot]

olehj (Owner) commented Apr 17, 2023

@marcoyangyf I am rather interested in the formatting of that long drive string you get.

If you don't want to reveal the actual string it outputs, can you make a similar one by replacing only the characters matching a-z|A-Z|0-9? I need the length and any special characters.
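
If it helps, a minimal sketch of such a masking, using sed: every character in those classes becomes "x", so length, dots, dashes and slashes all survive:

lsscsi -u -g | sed 's/[a-zA-Z0-9]/x/g'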

olehj (Owner) commented Apr 17, 2023

And please, in text format, no screenshots ;)

olehj self-assigned this on Apr 17, 2023
marcoyangyf commented:

@olehj Thank you for your reply!
Here is a copy of my outputs:

(root@192.168.4.112) Password:
Last login: Tue Apr 18 00:13:04 2023 from 192.168.4.166
Linux 6.1.20-Unraid.
root@NorthernLight3s:~# lsscsi
[0:0:0:0] disk SanDisk Cruzer Fit 1.00 /dev/sda
[3:0:0:0] disk ATA ST4000NC001-1FS1 CN03 /dev/sdd
[4:0:0:0] disk ATA ST4000NC001-1FS1 CN03 /dev/sde
[5:0:0:0] disk ATA HUH728080ALE601 0001 /dev/sdf
[6:0:0:0] disk ATA WDC WD80EMAZ-00W 0A83 /dev/sdg
[7:0:0:0] disk ATA HGST HUS728T8TAL W414 /dev/sdh
[8:0:0:0] disk ATA HGST HUH728080AL T7JD /dev/sdi
[9:0:0:0] disk ATA WDC WUH721414AL W240 /dev/sdb
[9:0:1:0] disk ATA TOSHIBA MG07ACA1 0103 /dev/sdc
[N:0:0:1] disk GLOWAY Basic256GNVMe-M.2/80__1 /dev/nvme0n1
[N:1:0:1] disk GLOWAY Basic256GNVMe-M.2/80__1 /dev/nvme1n1
root@NorthernLight3s:~# lsscsi -u -g
[0:0:0:0] disk none /dev/sda /dev/sg0
[3:0:0:0] disk 5000c5007a8776ab /dev/sdd /dev/sg3
[4:0:0:0] disk 5000c5007a9abb67 /dev/sde /dev/sg4
[5:0:0:0] disk 5000cca254ea77ca /dev/sdf /dev/sg5
[6:0:0:0] disk 5000cca27dc1a172 /dev/sdg /dev/sg6
[7:0:0:0] disk 5000cca0bbd8d3f6 /dev/sdh /dev/sg7
[8:0:0:0] disk 5000cca254d324ff /dev/sdi /dev/sg8
[9:0:0:0] disk 5000cca290c94664 /dev/sdb /dev/sg1
[9:0:1:0] disk 5000039ab8d92c46 /dev/sdc /dev/sg2
[N:0:0:1] disk nvme.1dbe-323032323034323130323435-474c4f574159204261736963323536474e564d652d4d2e322f3830-00000001 /dev/nvme0n1 -
[N:1:0:1] disk nvme.1dbe-323032323034323130323132-474c4f574159204261736963323536474e564d652d4d2e322f3830-00000001 /dev/nvme1n1 -
root@NorthernLight3s:~#

Also the screenshot:
[screenshot]
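
Side note on that long identifier: it looks structured rather than random. The leading 1dbe is the PCI vendor id (7614 decimal, matching the smartctl output further down), and the middle fields appear to be hex-encoded ASCII. A quick check, assuming xxd is available:

echo 323032323034323130323435 | xxd -r -p; echo
# prints 202204210245, the drive's serial number; the third field
# decodes the same way to "GLOWAY Basic256GNVMe-M.2/80"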

olehj (Owner) commented Apr 17, 2023

And here I give you an old command that was in use back in the day... how's the output with:
lsscsi -b -g

Sorry for the confusion, it was just stuck in my brain.

olehj (Owner) commented Apr 17, 2023

And what about this, does this output anything useful?
smartctl -x --json /dev/nvme0n1

marcoyangyf commented:

@olehj
root@NorthernLight3s:~# lsscsi -b -g
[0:0:0:0] /dev/sda /dev/sg0
[3:0:0:0] /dev/sdd /dev/sg3
[4:0:0:0] /dev/sde /dev/sg4
[5:0:0:0] /dev/sdf /dev/sg5
[6:0:0:0] /dev/sdg /dev/sg6
[7:0:0:0] /dev/sdh /dev/sg7
[8:0:0:0] /dev/sdi /dev/sg8
[9:0:0:0] /dev/sdb /dev/sg1
[9:0:1:0] /dev/sdc /dev/sg2
[N:0:0:1] /dev/nvme0n1 -
[N:1:0:1] /dev/nvme1n1 -
root@NorthernLight3s:~#

I guess this is the problem: these NVMe drives do not have a valid "name"?

olehj (Owner) commented Apr 17, 2023

No, that's fine and taken care of; the problem is somewhere later in the chain.

marcoyangyf commented:

I don't know why my posts have been struck through; sorry for the inconvenience.

This is the output of

smartctl -x --json /dev/nvme0n1

root@NorthernLight3s:~# smartctl -x --json /dev/nvme0n1
{
  "json_format_version": [1, 0],
  "smartctl": {
    "version": [7, 3],
    "svn_revision": "5338",
    "platform_info": "x86_64-linux-6.1.20-Unraid",
    "build_info": "(local build)",
    "argv": ["smartctl", "-x", "--json", "/dev/nvme0n1"],
    "exit_status": 0
  },
  "local_time": {
    "time_t": 1681749529,
    "asctime": "Tue Apr 18 00:38:49 2023 CST"
  },
  "device": {
    "name": "/dev/nvme0n1",
    "info_name": "/dev/nvme0n1",
    "type": "nvme",
    "protocol": "NVMe"
  },
  "model_name": "GLOWAY Basic256GNVMe-M.2/80",
  "serial_number": "202204210245",
  "firmware_version": "2.1.0.16",
  "nvme_pci_vendor": {
    "id": 7614,
    "subsystem_id": 21014
  },
  "nvme_ieee_oui_identifier": 1193046,
  "nvme_total_capacity": 256060514304,
  "nvme_unallocated_capacity": 0,
  "nvme_controller_id": 0,
  "nvme_version": {
    "string": "1.4",
    "value": 66560
  },
  "nvme_number_of_namespaces": 1,
  "nvme_namespaces": [
    {
      "id": 1,
      "size": {
        "blocks": 500118192,
        "bytes": 256060514304
      },
      "capacity": {
        "blocks": 500118192,
        "bytes": 256060514304
      },
      "utilization": {
        "blocks": 500118192,
        "bytes": 256060514304
      },
      "formatted_lba_size": 512
    }
  ],
  "user_capacity": {
    "blocks": 500118192,
    "bytes": 256060514304
  },
  "logical_block_size": 512,
  "smart_support": {
    "available": true,
    "enabled": true
  },
  "smart_status": {
    "passed": true,
    "nvme": {
      "value": 0
    }
  },
  "nvme_smart_health_information_log": {
    "critical_warning": 0,
    "temperature": 50,
    "available_spare": 100,
    "available_spare_threshold": 10,
    "percentage_used": 7,
    "data_units_read": 2531777,
    "data_units_written": 3951227,
    "host_reads": 11391637,
    "host_writes": 23423143,
    "controller_busy_time": 0,
    "power_cycles": 64,
    "power_on_hours": 6925,
    "unsafe_shutdowns": 1,
    "media_errors": 0,
    "num_err_log_entries": 0,
    "warning_temp_time": 0,
    "critical_comp_time": 0,
    "temperature_sensors": [51, 0, 58]
  },
  "temperature": {
    "current": 50
  },
  "power_cycle_count": 64,
  "power_on_time": {
    "hours": 6925
  }
}
free(): invalid pointer
Aborted

olehj (Owner) commented Apr 17, 2023

Based on this data, it should really just work. But there's something happening at the end there that I have never seen before and can't reproduce myself:

free(): invalid pointer
Aborted

If those lines get dragged into json_decode in PHP, they might cause a parse error.

Can you try this command instead and check whether you still get those lines:
smartctl -x --json --quietmode=silent /dev/nvme0n1
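
As a sketch of how to verify that (assuming jq is installed): split stdout from stderr and check whether the abort text lands in the JSON stream, which is what json_decode would choke on:

smartctl -x --json /dev/nvme0n1 > out.json 2> err.txt
jq empty out.json && echo "stdout is clean JSON" || echo "stdout carries stray text"
cat err.txt   # shows the free(): invalid pointer / Aborted lines, if any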

marcoyangyf commented:

Thank you so much for continuously helping me solve this problem.
Here is the outcome of smartctl -x --json --quietmode=silent /dev/nvme0n1:

{
  "json_format_version": [1, 0],
  "smartctl": {
    "version": [7, 3],
    "svn_revision": "5338",
    "platform_info": "x86_64-linux-6.1.20-Unraid",
    "build_info": "(local build)",
    "argv": ["smartctl", "-x", "--json", "--quietmode=silent", "/dev/nvme0n1"],
    "exit_status": 4
  },
  "local_time": {
    "time_t": 1681751691,
    "asctime": "Tue Apr 18 01:14:51 2023 CST"
  },
  "device": {
    "name": "/dev/nvme0n1",
    "info_name": "/dev/nvme0n1",
    "type": "nvme",
    "protocol": "NVMe"
  },
  "model_name": "GLOWAY Basic256GNVMe-M.2/80",
  "serial_number": "202204210245",
  "firmware_version": "2.1.0.16",
  "nvme_pci_vendor": {
    "id": 7614,
    "subsystem_id": 21014
  },
  "nvme_ieee_oui_identifier": 1193046,
  "nvme_total_capacity": 256060514304,
  "nvme_unallocated_capacity": 0,
  "nvme_controller_id": 0,
  "nvme_version": {
    "string": "1.4",
    "value": 66560
  },
  "nvme_number_of_namespaces": 1,
  "nvme_namespaces": [
    {
      "id": 1,
      "size": {
        "blocks": 500118192,
        "bytes": 256060514304
      },
      "capacity": {
        "blocks": 500118192,
        "bytes": 256060514304
      },
      "utilization": {
        "blocks": 500118192,
        "bytes": 256060514304
      },
      "formatted_lba_size": 512
    }
  ],
  "user_capacity": {
    "blocks": 500118192,
    "bytes": 256060514304
  },
  "logical_block_size": 512,
  "smart_support": {
    "available": true,
    "enabled": true
  },
  "smart_status": {
    "passed": true,
    "nvme": {
      "value": 0
    }
  },
  "nvme_smart_health_information_log": {
    "critical_warning": 0,
    "temperature": 50,
    "available_spare": 100,
    "available_spare_threshold": 10,
    "percentage_used": 7,
    "data_units_read": 2531777,
    "data_units_written": 3951227,
    "host_reads": 11391637,
    "host_writes": 23423143,
    "controller_busy_time": 0,
    "power_cycles": 64,
    "power_on_hours": 6926,
    "unsafe_shutdowns": 1,
    "media_errors": 0,
    "num_err_log_entries": 0,
    "warning_temp_time": 0,
    "critical_comp_time": 0,
    "temperature_sensors": [51, 0, 58]
  },
  "temperature": {
    "current": 50
  },
  "power_cycle_count": 64,
  "power_on_time": {
    "hours": 6926
  }
}

olehj (Owner) commented Apr 17, 2023

Alright, I'll try to add that flag to the script and see if that helps.
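
Presumably that boils down to one extra flag on the plugin's smartctl invocation; a hypothetical before/after sketch ($dev stands in for whatever variable the script actually uses):

smartctl -x --json "$dev"                      # old: stray non-JSON lines can reach the parser
smartctl -x --json --quietmode=silent "$dev"   # new: smartctl suppresses its own non-JSON output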

olehj (Owner) commented Apr 17, 2023

So, if you want to test this before I pull it to stable, that would be great.

You must first uninstall "Disk Location", then go to APPS and install "Disk Location - Developer Edition" instead. Do a "force scan all" if it's not available right away.

Settings should remain intact.

If it's confirmed working, I'll push this through.

marcoyangyf commented:

@olehj
There we go! I got my NVMe SSDs right away, without a "force scan all"!
[screenshot]

marcoyangyf commented:

Thank you so much!

olehj (Owner) commented Apr 17, 2023

Alright, awesome!

Then you can delete the developer edition and reinstall the regular version when I pull this to stable soon.

olehj closed this as completed in d390d76 on Apr 17, 2023