
Performance is slow with larger result sets #15

Closed
abmusse opened this issue Nov 29, 2017 · 11 comments

Comments

@abmusse
Member

abmusse commented Nov 29, 2017

Original report by Kristopher Baehr (Bitbucket: krisbaehr, GitHub: krisbaehr).


I've noticed that this connector is slower than JDBC in handling result sets, and I have an idea why. The idb connector converts the result set to JSON, whereas JDBC does not. I suspect this accounts for most of the difference in execution time I'm seeing. I tried turning on debugging for the connection but didn't get what I was hoping for.

I'd like someone (I'm willing to help) to modify dbconn.cc to accumulate the time spent in its various functions, specifically how much time is spent converting the ODBC result to JSON, and then add debug() statements to report those times to the console.

If we find that the JSON conversion is where the large majority of the time is being spent, I would like the maintainers to explore alternate solutions.
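Before instrumenting dbconn.cc itself, the same question can be probed from the Node side. A minimal sketch, assuming nothing about idb-connector internals; `timeIt` and the sample rows are hypothetical stand-ins for the real fetch-to-JSON step, not part of any library:

```javascript
// Minimal timing helper: runs fn, returns [result, elapsed milliseconds].
// timeIt is a hypothetical name for illustration, not an idb-connector API.
function timeIt(fn) {
  const start = process.hrtime.bigint();
  const result = fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return [result, elapsedMs];
}

// Fake workload standing in for the result-set-to-JSON conversion.
const rows = Array.from({ length: 8000 }, (_, i) => ({ id: i, name: 'row' + i }));
const [json, ms] = timeIt(() => JSON.stringify(rows));
console.log(`JSON conversion took ${ms.toFixed(3)} ms for ${rows.length} rows`);
```

Comparing this caller-side number against the total query time would show how much of the gap is conversion versus data access.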

[Attachment: 2017-11-29 13_04_01-Single thread comparison (qa).ods - LibreOffice Calc.png]

Thanks!

@abmusse
Member Author

abmusse commented Nov 29, 2017

Original comment by Kristopher Baehr (Bitbucket: krisbaehr, GitHub: krisbaehr).


Node - SQL: Node.js running the same SQL the RPG SP runs.
Node - RPG: Node.js calling that SP through the idb connector.
Java - RPG: Java calling the same SP as Node, using JDBC.

@abmusse
Member Author

abmusse commented Nov 29, 2017

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


Hi @krisbaehr,

Could you share the Node.js code you're using to calculate the Node.js stats? If the code is more than 100 lines, then maybe create a Bitbucket Snippet.

@abmusse
Member Author

abmusse commented Nov 29, 2017

Original comment by Kristopher Baehr (Bitbucket: krisbaehr, GitHub: krisbaehr).


@aaronbartell Be forewarned, I'm pretty new to this Node.js thing!

https://bitbucket.org/snippets/krisbaehr/keBa4E/node-sql-sp-call-for-performance-testing

@abmusse
Member Author

abmusse commented Feb 28, 2018

Original comment by Xu Meng (Bitbucket: mengxumx, GitHub: dmabupt).


Hello @krisbaehr ,
I have rebuilt the idb-connector to allow a flexible column width, to reduce memory usage. You can reinstall idb-connector and use the new environment variable MAXCOLWIDTH to change the default value of 32766. (Note: a smaller MAXCOLWIDTH costs less memory but may truncate long text.)

My test:

```javascript
// Assumes idb-connector is installed; the require line was implicit in the original snippet.
const db = require('idb-connector');

for (let colwidth = 128; colwidth < 65535; colwidth *= 2) {
  process.env.MAXCOLWIDTH = colwidth;
  const dbconn = new db.dbconn();
  // fetch / exec ......
}
```

```shell
bash-4.4$ node bench.js 8000
fetchAll took 2782.625468 ms -- Row Count: 8000 -- Col Width: 128
fetchAll took 1727.906687 ms -- Row Count: 8000 -- Col Width: 256
fetchAll took 1963.632875 ms -- Row Count: 8000 -- Col Width: 512
fetchAll took 1600.714552 ms -- Row Count: 8000 -- Col Width: 1024
fetchAll took 1616.5393869999998 ms -- Row Count: 8000 -- Col Width: 2048
fetchAll took 2070.021549 ms -- Row Count: 8000 -- Col Width: 4096
fetchAll took 3308.1328750000002 ms -- Row Count: 8000 -- Col Width: 8192
fetchAll took 4757.888971 ms -- Row Count: 8000 -- Col Width: 16384
fetchAll took 7919.675255 ms -- Row Count: 8000 -- Col Width: 32768
```
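For context on why column width dominates these timings: with fixed-width buffers, memory scales roughly as rows × columns × column width. A back-of-the-envelope sketch (illustrative arithmetic only, not measured from idb-connector's actual allocator; the 10-column count is an assumption):

```javascript
// Rough fixed-width buffer estimate: rows * cols * colWidth bytes, reported in MB.
// Illustrative arithmetic only; real idb-connector allocation may differ.
function bufferEstimateMB(rows, cols, colWidth) {
  return (rows * cols * colWidth) / (1024 * 1024);
}

// 8000 rows x 10 columns at the old default width of 32766 bytes:
console.log(bufferEstimateMB(8000, 10, 32766).toFixed(1) + ' MB'); // → 2499.8 MB
// versus a 128-byte column width:
console.log(bufferEstimateMB(8000, 10, 128).toFixed(1) + ' MB');   // → 9.8 MB
```

The gap between those two estimates is consistent with the benchmark above, where the largest column widths take several times longer to fetch.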

@abmusse
Member Author

abmusse commented Feb 28, 2018

Original comment by Kristopher Baehr (Bitbucket: krisbaehr, GitHub: krisbaehr).


@dmabupt Thank you for this! I appreciate your diligence in resolving this performance issue. I will give it a shot soon.

@abmusse
Member Author

abmusse commented Mar 9, 2018

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


@krisbaehr, any luck? I'm hoping this shows proof of concept while we work on a more permanent fix.

@abmusse
Member Author

abmusse commented Mar 12, 2018

Original comment by Xu Meng (Bitbucket: mengxumx, GitHub: dmabupt).


@krisbaehr Commits b56ab92 and c97e353 fixed the problem. The column width is now accurate. You can upgrade idb-connector to v1.0.6 to verify that.

@abmusse
Member Author

abmusse commented Mar 19, 2018

Original comment by Xu Meng (Bitbucket: mengxumx, GitHub: dmabupt).


@krisbaehr Does version 1.0.6 fix the problem? I forgot to mention that with the fix there is no longer an environment variable to set the column width; the accurate column width is calculated automatically to reduce memory usage.

@abmusse
Member Author

abmusse commented Apr 10, 2018

Original comment by Kristopher Baehr (Bitbucket: krisbaehr, GitHub: krisbaehr).


@dmabupt, we switched over to version 1.0.6 last week and ran some tests. There was a significant performance improvement! Before, db2a performed about the same as JDBC at 500 rows, and that was after our various tweaks, including OS tweaks for improved multi-threaded processing. Now, 1.0.6 is equal with JDBC at 1000 rows. At 2000 rows it's still performing pretty well, and it flat-out smokes db2a (I'm not sure which version that was). Thanks for everything.

[Attachment: 158.jpg]

@abmusse
Member Author

abmusse commented Apr 11, 2018

Awesome!

@abmusse
Member Author

abmusse commented May 7, 2018

Original comment by Xu Meng (Bitbucket: mengxumx, GitHub: dmabupt).


v1.0.6 fixed it.
