NimBLE/Host: Feature Multi Handle Value Notification #1426
Conversation
Style check summary: No suggestions at this time!
Hi @sjanc,
Oh, and it would be nice if you could also add some unit tests for those (see nimble/host/test/).
#AutoPTS run mynewt GATT/SR/GAN
@@ -79,6 +79,12 @@ struct ble_hs_cfg;
#define BLE_GATT_SVC_TYPE_PRIMARY 1
#define BLE_GATT_SVC_TYPE_SECONDARY 2

/*** @server. */
struct ble_gatt_hv {
hv isn't very clear, maybe:

-struct ble_gatt_hv {
+struct ble_gatt_mult_notif_entry {

or notif_value, as @sjanc suggested.
/*** @server. */
struct ble_gatt_hv {
    uint16_t handle;
    struct os_mbuf* value;
codestyle:

-struct os_mbuf* value;
+struct os_mbuf *value;
 * @return 0 on success; nonzero on failure.
 */
int ble_gatts_multi_notify_custom(uint16_t conn_handle,
                                  struct ble_gatt_hv * tuples, uint16_t num_tuples);
Adjust the name after the change, and codestyle:

-struct ble_gatt_hv * tuples, uint16_t num_tuples);
+struct ble_gatt_mult_notif_entry *tuples, uint16_t num_tuples);
}
rc = ble_att_svr_read_handle(BLE_HS_CONN_HANDLE_NONE,
                             tuples[i].handle, 0, tuples[i].value, NULL);
if (rc != 0) {
This is neither a fatal error nor caused only by missing permissions (permissions are not checked for self-reads). Maybe just log the error:

-if (rc != 0) {
+    BLE_HS_LOG(ERROR, "Attribute read failed (err=0x%02x)", rc);

and return rc as-is: for example, the handle may have been invalid, and BLE_HS_EAPP doesn't make sense then.
for (i = 0; i < num_tuples; i++) {
    pdu_size += (2 * sizeof(uint16_t)) + OS_MBUF_PKTLEN(tuples[i].value);
    if (pdu_size > mtu) {
        /* The notification will need to be split */
        if (i == split_at) {
            /* Single notification too large for server,
             * cannot send notify without truncating.
             */
            rc = BLE_HS_ENOMEM;
            goto done;
        }
        split_at = i;
        /* Reinitialize loop */
        i--;
        pdu_size = sizeof(uint8_t);
    }
}
I think these two loops can be merged (or this one removed and second modified)
The spec says we cannot truncate a multi-notification; if it is greater than the MTU, it needs to be split into smaller multi-notifications.
The second loop sends one multi-notification per iteration, each smaller than the MTU.
If the multi-notification contains a single notification that is itself too large for the MTU, it cannot be split, the whole PDU cannot be sent, and the operation must be aborted.
This "too-large" notification can be anywhere inside the multi-notification, so we cannot risk splitting and sending a few multi-notifications only to find that the next one contains a "too-large" notification and the operation has to be aborted, because at that point the client has already received the notifications that were sent.
That is why the first loop is needed: to check that there is no "too-large" single notification.
I agree this is a very corner case and the loops look redundant; I will see what I can do about the two loops.
    continue;
}
/* Handle */
rc = os_mbuf_copyinto(om, OS_MBUF_PKTLEN(om),
You are copying data into the buffer at the same offset every time, which will just overwrite the previous data. I think you need to use os_mbuf_append().
    goto done;
}
/* Length */
rc = os_mbuf_copyinto(om, OS_MBUF_PKTLEN(om),
ditto (os_mbuf_append)
    goto done;
}
/* Value */
rc = os_mbuf_appendfrom(om, tuples[i].value,
offset is 0, so os_mbuf_append can be used afaik
os_mbuf_append needs a flat buffer; the source here is another mbuf.
for (i = 0; i < num_tuples; i++) {
    pdu_size += (2 * sizeof(uint16_t)) + OS_MBUF_PKTLEN(tuples[i].value);
    if (pdu_size > mtu) {
        rc = ble_att_clt_tx_multi_notify(conn_handle, om);
        if (rc != 0) {
            goto done;
        }
        split_at = i;
        i--;
        pdu_size = sizeof(uint8_t);
        om = ble_hs_mbuf_att_pkt();
        if(om == NULL) {
            rc = BLE_HS_ENOMEM;
            goto done;
        }
        continue;
    }
    /* Handle */
    rc = os_mbuf_copyinto(om, OS_MBUF_PKTLEN(om),
                          &tuples[i].handle, sizeof tuples[i].handle);
    if (rc != 0) {
        rc = BLE_HS_ENOMEM;
        os_mbuf_free_chain(om);
        goto done;
    }
    /* Length */
    rc = os_mbuf_copyinto(om, OS_MBUF_PKTLEN(om),
                          &(OS_MBUF_PKTLEN(tuples[i].value)),
                          sizeof(OS_MBUF_PKTLEN(tuples[i].value)));
    if (rc != 0) {
        rc = BLE_HS_ENOMEM;
        os_mbuf_free_chain(om);
        goto done;
    }
    /* Value */
    rc = os_mbuf_appendfrom(om, tuples[i].value,
                            0, OS_MBUF_PKTLEN(tuples[i].value));
    if (rc != 0) {
        rc = BLE_HS_ENOMEM;
        os_mbuf_free_chain(om);
        goto done;
    }
}
rc = ble_att_clt_tx_multi_notify(conn_handle, om);
int attr_in_cur_pdu_cnt = 0; should be added on top, and then:
for (i = 0; i < num_tuples; i++) {
    pdu_size += (2 * sizeof(uint16_t)) + OS_MBUF_PKTLEN(tuples[i].value);
    /* Multiple handle notification can only be sent with two or more handles */
    if (pdu_size > mtu && attr_in_cur_pdu_cnt > 2) {
        rc = ble_att_clt_tx_multi_notify(conn_handle, om);
        if (rc != 0) {
            goto done;
        }
        attr_in_cur_pdu_cnt = 0;
        pdu_size = sizeof(uint8_t);
        om = ble_hs_mbuf_att_pkt();
        if(om == NULL) {
            rc = BLE_HS_ENOMEM;
            goto done;
        }
        if (num_tuples - i < 2) {
            BLE_HS_LOG(DEBUG, "Notifications partially sent (%d/%d)", i, num_tuples);
            i--;
            goto done;
        }
    }
    /* Handle */
    rc = os_mbuf_copyinto(om, OS_MBUF_PKTLEN(om),
                          &tuples[i].handle, sizeof tuples[i].handle);
    if (rc != 0) {
        rc = BLE_HS_ENOMEM;
        os_mbuf_free_chain(om);
        goto done;
    }
    /* Length */
    rc = os_mbuf_copyinto(om, OS_MBUF_PKTLEN(om),
                          &(OS_MBUF_PKTLEN(tuples[i].value)),
                          sizeof(OS_MBUF_PKTLEN(tuples[i].value)));
    if (rc != 0) {
        rc = BLE_HS_ENOMEM;
        os_mbuf_free_chain(om);
        goto done;
    }
    /* Value */
    rc = os_mbuf_appendfrom(om, tuples[i].value,
                            0, OS_MBUF_PKTLEN(tuples[i].value));
    if (rc != 0) {
        rc = BLE_HS_ENOMEM;
        os_mbuf_free_chain(om);
        goto done;
    }
    attr_in_cur_pdu_cnt++;
}
rc = ble_att_clt_tx_multi_notify(conn_handle, om);
plus the changes from the other comments.
/* Tell the application that multiple notification transmissions were attempted. */
for (i = 0; i < num_tuples; i++) {
    ble_gap_notify_tx_event(rc, conn_handle, tuples[i].handle, 0);
    os_mbuf_free_chain(tuples[i].value);
}
int j; // this on top

/* Tell the application that multiple notification transmissions were attempted. */
for (j = 0; j < i; j++) {
    ble_gap_notify_tx_event(rc, conn_handle, tuples[j].handle, 0);
    os_mbuf_free_chain(tuples[j].value);
}
i now holds how many tuples were actually sent (there may be leftovers).
This is fine because we are assigning 0 to i again
@SumeetSingh19 These features are currently being worked on in #1549; however, merging the feature of sending multiple handle notifications would be helpful. I added some comments to this feature. Could you please extract only this one and submit it? I think it can be done either here or in a separate PR.
Added a new feature to send and receive multi handle value notifications.
This PR adds support for sending and receiving multi handle value notifications. This is part of Core spec v5.3, ATT_MULTIPLE_HANDLE_VALUE_NTF (Vol 3, Part F, 3.4.7.4).
On the Rx side, code is added to parse the PDU and send NOTIFY_RX events multiple times from the stack to the registered application.
On the Tx side, code is added to create a PDU consisting of multiple handle/length/value tuples and send it via att_tx.