
During subscribing notification on characteristic - Error 6 (0x6) GATT REQ NOT SUPPORTED (IDFGH-12790) #13768

Open
3 tasks done
veneno-529 opened this issue May 9, 2024 · 5 comments
Assignees
Labels
Status: In Progress Work is in progress

Comments

@veneno-529

Answers checklist.

  • I have read the documentation (ESP-IDF Programming Guide) and the issue is not addressed there.
  • I have updated my IDF branch (master or release) to the latest version and checked that the issue is present there.
  • I have searched the issue tracker for a similar issue and have not found one.

General issue report

Hello,

I'm using ESP-IDF 5.1.1 with NimBLE. I recently added notify to one of my characteristics. Normally, after I erase the ESP32 and flash the code, I can at first successfully subscribe to and unsubscribe from notifications; once subscribed, the notification stays subscribed until the device disconnects. But after a certain number of hours, when I try to subscribe to the notification, it gives me Error 6 (0x6) GATT REQ NOT SUPPORTED. I am testing this with the nRF app; on auto-connect it subscribes and then automatically unsubscribes as well.
Please find below the code in which I have implemented the notify part, along with a screenshot of the error and some logs from the device.
esp log
nrf ss

Under the GAP event handler function, the following event has been added for notifications:

case BLE_GAP_EVENT_SUBSCRIBE:
    f_pairing = FALSE;
    f_ble_paired = TRUE;
    //vTaskResume(pirStatusHandle);
    MODLOG_DFLT(INFO, "subscribe event; conn_handle=%d attr_handle=%d "
                      "reason=%d prevn=%d curn=%d previ=%d curi=%d\n",
                event->subscribe.conn_handle,
                event->subscribe.attr_handle,
                event->subscribe.reason,
                event->subscribe.prev_notify,
                event->subscribe.cur_notify,
                event->subscribe.prev_indicate,
                event->subscribe.cur_indicate);
    if (event->subscribe.attr_handle == notification_handle) {
        printf("\nSubscribed with notification_handle =%d\n",
               event->subscribe.attr_handle);
        // The client is now subscribed to notifications, so the value is set to 1
        notify_state = event->subscribe.cur_notify;
        printf("notify_state=%d\n", notify_state);
    }
    return 0;

below is the gatt svcs code
const struct ble_gatt_svc_def gatt_svcs[] = {
    {
        .type = BLE_GATT_SVC_TYPE_PRIMARY,
        .uuid = &gatt_svr_svc_uuid.u, // Define UUID for device type
        .characteristics = (struct ble_gatt_chr_def[]){
            {
                // Define UUID for reading from and writing to the esp32
                .uuid = &gatt_svr_chartx_rx_uuid.u,
                .flags = BLE_GATT_CHR_F_READ | BLE_GATT_CHR_F_WRITE |
                         BLE_GATT_CHR_F_READ_ENC | BLE_GATT_CHR_F_WRITE_ENC,
                .access_cb = device_char_read_write,
            },
            {
                .uuid = &gatt_svr_strtx_rx_uuid.u,
                .val_handle = &notification_handle,
                .flags = BLE_GATT_CHR_F_READ | BLE_GATT_CHR_F_WRITE |
                         BLE_GATT_CHR_F_WRITE_ENC | BLE_GATT_CHR_F_NOTIFY,
                .access_cb = device_string_read_write,
            },
            {0},
        },
    },
    {0},
};

The error occurs randomly, and once it occurs I'm unable to subscribe to notifications until I erase the ESP32 and reflash it.
I have configured NimBLE with the following settings:

  1. Maximum number of concurrent connects = 1
  2. Maximum number of bonds to save across boots = 1
  3. Maximum number of CCC descriptors to save across reboots = 12
  4. Host based privacy for random address = enabled
  5. BLE MAX connections = 3

Additionally, when a certain event occurs, I unpair the device using the function below.
struct ble_hs_dev_records *peer_dev_records = ble_rpa_get_peer_dev_records();
ble_gap_unpair(&peer_dev_records[0].peer_sec.peer_addr);

@espressif-bot espressif-bot added the Status: Opened Issue is new label May 9, 2024
@github-actions github-actions bot changed the title During subscribing notification on characteristic - Error 6 (0x6) GATT REQ NOT SUPPORTED During subscribing notification on characteristic - Error 6 (0x6) GATT REQ NOT SUPPORTED (IDFGH-12790) May 9, 2024
@veneno-529
Author

Hello @rahult-github
Please find below the debug-level logs from when I try to subscribe to the notification.

I (62286) NimBLE: subscribe event; conn_handle=0 attr_handle=27 reason=1 prevn=0 curn=1 previ=0 curi=0
Subscribed with notification_handle =27
notify_state=1
D (62301) NimBLE: error persisting cccd; too many entries (12)

D (62307) NimBLE: looking up our sec;
D (62310) NimBLE:

D (62313) NimBLE: looking up our sec;
D (62316) NimBLE:

D (62319) NimBLE: host tx hci data; handle=0 length=9

D (62324) NimBLE: ble_hs_hci_acl_tx():
D (62328) NimBLE: 0x00
D (62330) NimBLE: 0x00
D (62333) NimBLE: 0x09
D (62335) NimBLE: 0x00
D (62338) NimBLE: 0x05
D (62340) NimBLE: 0x00
D (62343) NimBLE: 0x04
D (62345) NimBLE: 0x00
D (62348) NimBLE: 0x01
D (62351) NimBLE: 0x12
D (62353) NimBLE: 0x1c
D (62356) NimBLE: 0x00
D (62358) NimBLE: 0x06
D (62361) NimBLE:

D (62508) NimBLE: ble_hs_hci_evt_acl_process(): conn_handle=0 pb=2 len=7 data=
D (62509) NimBLE: 0x03
D (62509) NimBLE: 0x00
D (62510) NimBLE: 0x04
D (62512) NimBLE: 0x00
D (62515) NimBLE: 0x0a
D (62517) NimBLE: 0x19
D (62520) NimBLE: 0x00
D (62522) NimBLE:

D (62525) NimBLE: host tx hci data; handle=0 length=6

D (62530) NimBLE: ble_hs_hci_acl_tx():
D (62534) NimBLE: 0x00
D (62536) NimBLE: 0x00
D (62539) NimBLE: 0x06
D (62541) NimBLE: 0x00
D (62544) NimBLE: 0x02
D (62546) NimBLE: 0x00
D (62549) NimBLE: 0x04
D (62552) NimBLE: 0x00
D (62554) NimBLE: 0x0b
D (62556) NimBLE: 0x63
D (62559) NimBLE:

D (62654) NimBLE: ble_hs_hci_evt_acl_process(): conn_handle=0 pb=2 len=9 data=
D (62655) NimBLE: 0x05
D (62655) NimBLE: 0x00
D (62656) NimBLE: 0x04
D (62658) NimBLE: 0x00
D (62661) NimBLE: 0x12
D (62663) NimBLE: 0x1c
D (62666) NimBLE: 0x00
D (62669) NimBLE: 0x00
D (62671) NimBLE: 0x00
D (62674) NimBLE:

I (62676) NimBLE: subscribe event; conn_handle=0 attr_handle=27 reason=1 prevn=1 curn=0 previ=0 curi=0

Subscribed with notification_handle =27
notify_state=0

Please help me out with it.

@rahult-github
Collaborator

error persisting cccd; too many entries (12)

Looks like too many CCCDs are getting written. Please increase the value of BT_NIMBLE_MAX_CCCDS in menuconfig and try again.
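For context, that menuconfig option corresponds to a Kconfig symbol in sdkconfig; a minimal sketch of the change (the value 16 here is an arbitrary example, not a recommendation):

```
CONFIG_BT_NIMBLE_MAX_CCCDS=16
```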

@veneno-529
Author

veneno-529 commented May 21, 2024

@rahult-github this only temporarily resolves the issue; I have already tried it.
After some time it occurs again. It seems that the oldest peer's info is not getting deleted.

I have found a similar issue here =

The patch provided in that issue does not match the current files of this NimBLE version.
I implemented the patch anyway but still could not resolve the issue.

I was going through the NimBLE stack to figure out exactly where this error comes from.
I found that, because I am using an RPA-enabled host, the function below would never return BLE_HS_ESTORE_CAP.

pic

Correct me if I'm wrong, but this would never trigger the overflow event. As I understand it, when the ble_store_write function is executed, it checks for BLE_HS_ESTORE_CAP; if that error is found, a ble_store_overflow_event occurs to free up space. But since the function in the image never returns the error, it may never free up space. I'm using the round-robin callback, so ideally it should delete the oldest peer's CCCD info, which is not happening. This is as far as I could get; I got stuck at where sysinit is called and how the functions ensure they run in the sysinit stage. Please help so that we can resolve this issue.
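To illustrate the behaviour I expect from the round-robin store, here is a self-contained conceptual sketch (all names are made up; this is not NimBLE's actual implementation): if the write path never reports the capacity error, the eviction step never runs and the oldest entry is never dropped; when the error is reported, the oldest entry is evicted and the write retried.

```c
#include <assert.h>
#include <string.h>

#define STORE_CAP 3
#define E_STORE_CAP (-1) /* stand-in for BLE_HS_ESTORE_CAP */

struct cccd_store {
    int entries[STORE_CAP]; /* peer ids, oldest first */
    int count;
};

/* Raw write: fails when full, like a persistence layer
 * reporting BLE_HS_ESTORE_CAP. */
static int store_write_raw(struct cccd_store *s, int peer_id)
{
    if (s->count == STORE_CAP) {
        return E_STORE_CAP;
    }
    s->entries[s->count++] = peer_id;
    return 0;
}

/* Overflow handling: evict the oldest entry (index 0), round-robin style. */
static void store_evict_oldest(struct cccd_store *s)
{
    memmove(&s->entries[0], &s->entries[1],
            (size_t)(s->count - 1) * sizeof s->entries[0]);
    s->count--;
}

/* Write path that reacts to the capacity error, as the round-robin
 * status callback is meant to. If store_write_raw never returned
 * E_STORE_CAP, store_evict_oldest would never run -- the failure
 * mode described above. */
static int store_write(struct cccd_store *s, int peer_id)
{
    int rc = store_write_raw(s, peer_id);
    if (rc == E_STORE_CAP) {
        store_evict_oldest(s);
        rc = store_write_raw(s, peer_id);
    }
    return rc;
}
```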

@rahult-github
Collaborator

Hi @veneno-529 ,

Can you please:

  1. Try the attached patch and see if it helps. The patch is to be applied in the $IDF_PATH/components/bt/host/nimble/nimble path.
  2. Please share the steps you use to reproduce the issue, so that we can try to reproduce it ourselves in case the patch above does not help.
    check_for_size.txt

@espressif-bot espressif-bot added Status: In Progress Work is in progress and removed Status: Opened Issue is new labels May 22, 2024
@veneno-529
Author

Hello @rahult-github

So, first: I tried the patch and still had no luck; the issue keeps occurring. To reproduce the issue, please follow the steps below.

Configure NimBLE with the following settings:

Maximum number of concurrent connects = 1
Maximum number of bonds to save across boots = 3
Maximum number of CCC descriptors to save across reboots = 4
Host based privacy for random address = enabled
BLE MAX connections = 3

ble_hs_cfg.store_status_cb = ble_store_util_status_rr;
RPA is enabled exactly as in the examples.
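For what it's worth, the settings above should correspond roughly to these sdkconfig symbols (names taken from ESP-IDF's NimBLE Kconfig; the exact mapping of "Maximum number of concurrent connects" is my assumption, so this is a sketch, not a verified config):

```
CONFIG_BT_NIMBLE_MAX_CONNECTIONS=3
CONFIG_BT_NIMBLE_MAX_BONDS=3
CONFIG_BT_NIMBLE_MAX_CCCDS=4
CONFIG_BT_NIMBLE_HOST_BASED_PRIVACY=y
```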

First, erase flash completely to ensure memory is clean and no data from previous connections is present.
Then flash the firmware.

  1. Connect an Android device, pair it, and subscribe to notifications.
  2. Disconnect the Android device.
  3. Restart the ESP32; you get a log of RL count = 1.
  4. Now connect an iPhone or any other Android device (I did it with an iPhone) and subscribe to notifications.
  5. Disconnect the device.
  6. Restart the ESP32; you get a log of RL count = 2, meaning it has stored info for both devices.
  7. Try to connect the iPhone again. It works fine. Disconnect it.
  8. Restart the ESP32; you will see that the RL count has changed to 1. It deleted the Android device's info.
  9. Now connect the Android device again; it fails at first and removes the iOS device's info. After disconnecting and connecting again, it stores the Android device's info, and the iOS (or any other) device's info is deleted.

Now, in the logs I'm clearly getting "error persisting cccd. Did not remove peer device. Removing peer device from index 0."
So this is how I'm able to reproduce it, but I observed two more things here:

  1. It stores info for only two devices, even though the max bond and device count is 3.
  2. Once the "error persisting cccd" error triggers, it is only able to store info for one device.
