This repository has been archived by the owner on Mar 10, 2023. It is now read-only.

Request timeout issue occurring sporadically #10

Open
VandanaLP opened this issue Sep 14, 2020 · 4 comments

Comments

@VandanaLP

When executing the getConnection() method on the pool object, we receive the error below:

"Request timeout. Request info: (id: 54, creation time: 1599638811640)","type":"log","custom_fields":{"stack":"Error: Request timeout. Request info: (id: 54, creation time: 1599638811640)\n at Request._fireTimeout (/home/vcap/app/node_modules/hdb-pool/lib/Request.js:114:17)\n at Timeout. (/home/vcap/app/node_modules/hdb-pool/lib/Request.js:92:43)\n at listOnTimeout (internal/timers.js:549:17)\n at processTimers (internal/timers.js:492:7)"}}

As this occurs sporadically, I am not quite sure what causes the issue. The only way to resolve it as of now is to restart the application.

Pool status object:
pool:    {"size":0,"min":2,"max":10,"available":0,"timeout":0}
request: {"number":1,"pending":0,"max":0,"resolved":53,"rejected":0,"timeout":2}

Could you please help me understand how we can resolve this? It is causing downtime for our application.

Thanks!

@ckyycc
Owner

ckyycc commented Sep 15, 2020

The pool status object you mentioned above is not the one from when the issue happened, right?

Normally, a request timeout means there is no room left in the pool. If you already call pool.release(client) after every execution, one reason may be that some queries are very slow and all the connections (max 10) are in use, so there is no room left in the pool. If there is no room in the pool, the request times out after the configured requestTimeout (by default 5 seconds).

So you may increase the max option for the pool, or increase the requestTimeout.
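
A minimal sketch of this pattern (option names per the hdb-pool README; the connection parameters and limits below are placeholders, not recommendations):

```js
const Pool = require('hdb-pool');

// Placeholder connection parameters -- substitute your own host and credentials.
const dbParams = {
  hostName: 'hana-host',
  port: '30015',
  userName: 'USER',
  password: 'SECRET'
};

// Pool options: raise `max` if more than 10 connections are used concurrently,
// and/or raise `requestTimeout` (milliseconds) if slow queries hold
// connections for longer than the default 5 seconds.
const options = {
  min: 2,
  max: 20,
  requestTimeout: 10000
};

const pool = Pool.createPool(dbParams, options);

async function runQuery(sql) {
  const conn = await pool.getConnection();
  try {
    return await new Promise((resolve, reject) =>
      conn.exec(sql, (err, rows) => (err ? reject(err) : resolve(rows))));
  } finally {
    // Always return the connection, even on error: a leaked connection
    // permanently shrinks the pool, and later getConnection() calls will
    // then hit the request timeout.
    await pool.release(conn);
  }
}
```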

@VandanaLP
Author

> The pool status object you mentioned above is not the one from when the issue happened, right?
>
> Normally, a request timeout means there is no room left in the pool. If you already call pool.release(client) after every execution, one reason may be that some queries are very slow and all the connections (max 10) are in use, so there is no room left in the pool. If there is no room in the pool, the request times out after the configured requestTimeout (by default 5 seconds).
>
> So you may increase the max option for the pool, or increase the requestTimeout.

Thank you for your input. The pool status shown is the status from when the timeout actually occurred, so I am also confused about what could cause this.
I also checked whether there was any issue with the HANA connection itself, but that does not seem to be the reason: no other application connecting to this same HANA system is affected.

@ckyycc
Owner

ckyycc commented Sep 16, 2020

> The pool status object you mentioned above is not the one from when the issue happened, right?
> Normally, a request timeout means there is no room left in the pool. If you already call pool.release(client) after every execution, one reason may be that some queries are very slow and all the connections (max 10) are in use, so there is no room left in the pool. If there is no room in the pool, the request times out after the configured requestTimeout (by default 5 seconds).
> So you may increase the max option for the pool, or increase the requestTimeout.

> Thank you for your input. The pool status shown is the status from when the timeout actually occurred, so I am also confused about what could cause this.
> I also checked whether there was any issue with the HANA connection itself, but that does not seem to be the reason: no other application connecting to this same HANA system is affected.

Well, I still think it might be related to the HANA connection during that period of time. If conn.connect fails, createPoolResource (in Operator.js) fails as well, and the related resource is removed from the pool. Please check the indexserver trace for error/warning/info entries related to a connection issue. Since the issue occurs sporadically, please also enable the trexnet info trace and the sqlsession debug trace. To avoid the traces being overwritten, you can increase the trace file count and size via:
alter system alter configuration ('global.ini','SYSTEM') set ('trace','maxfiles') = '100' with reconfigure;
alter system alter configuration ('global.ini','SYSTEM') set ('trace','maxfilesize') = '100000000' with reconfigure;

to restore it:
alter system alter configuration ('global.ini','SYSTEM') unset ('trace','maxfiles') with reconfigure;
alter system alter configuration ('global.ini','SYSTEM') unset ('trace','maxfilesize') with reconfigure;
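
For completeness, a sketch of raising the two component traces mentioned above; the component names ('trexnet', 'sqlsession') and the indexserver.ini location are assumptions to verify against your HANA revision:

```sql
-- Assumed syntax and component names; verify on your HANA revision.
alter system alter configuration ('indexserver.ini','SYSTEM') set ('trace','trexnet') = 'info' with reconfigure;
alter system alter configuration ('indexserver.ini','SYSTEM') set ('trace','sqlsession') = 'debug' with reconfigure;

-- To restore the defaults afterwards:
alter system alter configuration ('indexserver.ini','SYSTEM') unset ('trace','trexnet') with reconfigure;
alter system alter configuration ('indexserver.ini','SYSTEM') unset ('trace','sqlsession') with reconfigure;
```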

Please let me know if you can reproduce this on your test system; then we can enable the debug trace for the pool.

Regards,
CK

@sassman

sassman commented Oct 29, 2020

I've used the generic-pool package with @sap/hana-client on a project too and ran into this issue as well.
It was super hard to debug and reproduce, but it seems to be a flaw in the pool release mechanism.

However, I eventually dropped the pool completely in favour of the native pooling mechanism of the hana-client, and that turns out to work like a charm.
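
For reference, a minimal sketch of what that looks like. The pooling-related option names (pooling, maxPoolSize) are assumptions to check against the @sap/hana-client documentation for your client version; host and credentials are placeholders.

```js
const hana = require('@sap/hana-client');

// Placeholder connection parameters; `pooling` asks the driver to keep
// disconnected connections in its own internal pool.
const connParams = {
  serverNode: 'hana-host:30015',
  uid: 'USER',
  pwd: 'SECRET',
  pooling: true,
  maxPoolSize: 10
};

const conn = hana.createConnection();
conn.connect(connParams, (err) => {
  if (err) throw err;
  conn.exec('SELECT CURRENT_TIMESTAMP FROM DUMMY', (err, rows) => {
    if (err) throw err;
    console.log(rows);
    // With pooling enabled, disconnect() returns the physical connection
    // to the driver's pool instead of closing it.
    conn.disconnect();
  });
});
```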
