Replies: 2 comments 2 replies
-
Well, I wrote a test for my own little software that inserted and then read back 100 000 data objects, and a similar test for the JDBI framework (both against an H2 in-memory database). The times were pretty equal: about 10% faster for my code, but then mine lacks a lot of features (named parameters and all the rest of what JDBI offers). I think it is difficult to make it much faster than what I did, considering that at read/write time there is no extra code running beyond a tiny loop reading/writing values from/to the resultSet/statement. For every object read or written, this is the only code run, with no extra overhead per data object beyond getting the next one in the list.

For every object in the list to be saved on a batch insert (with a list of objects):

For every row in the resultSet on a select:
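The original snippet is not shown here, but a tight read loop of this kind might look roughly like the sketch below. Everything in it is my own illustration, not the author's actual code: the `Person` record, the `ColumnReader` interface, and the proxy-backed `ResultSet` (a stand-in so no real JDBC driver is needed) are all assumptions.

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class TightReadLoop {
    record Person(long id, String name) {}

    // A prepared column reader: column index and type decided once, at initialization.
    interface ColumnReader { Object read(ResultSet rs) throws Exception; }

    static final List<ColumnReader> EXTRACTORS = List.of(
            rs -> rs.getLong(1),
            rs -> rs.getString(2));

    // The per-row work at select time: call each prepared reader, then the constructor.
    static List<Person> readAll(ResultSet rs) {
        try {
            List<Person> out = new ArrayList<>();
            while (rs.next()) {
                out.add(new Person((Long) EXTRACTORS.get(0).read(rs),
                                   (String) EXTRACTORS.get(1).read(rs)));
            }
            return out;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Stand-in for a real JDBC ResultSet, backed by in-memory rows (no driver needed).
    static ResultSet fakeResultSet(List<Object[]> rows) {
        Iterator<Object[]> it = rows.iterator();
        Object[][] current = new Object[1][];
        return (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(), new Class<?>[] {ResultSet.class},
                (proxy, method, args) -> switch (method.getName()) {
                    case "next" -> { if (it.hasNext()) { current[0] = it.next(); yield true; } yield false; }
                    case "getLong", "getString" -> current[0][(int) args[0] - 1];
                    default -> throw new UnsupportedOperationException(method.getName());
                });
    }

    public static void main(String[] args) {
        List<Person> people = readAll(fakeResultSet(List.of(
                new Object[] {1L, "Ada"}, new Object[] {2L, "Alan"})));
        System.out.println(people); // [Person[id=1, name=Ada], Person[id=2, name=Alan]]
    }
}
```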
JDBI test case used (adapted from
My own software's test case, which runs in 800 ms:
After warmup (duplicating the test method and running the same thing again), the time difference drops to below 5%. Perhaps more could be done using something like Javassist; that might be an interesting endeavour. So to conclude: in this simple scenario I don't really see any significant performance gains to be had. I had worries, but as I tried to implement something that does as little as possible at runtime, I find that (for this simple use case of saving 100k objects and then reading them back) there's really not much to be done, and I can continue to use JDBI without worrying about performance.
-
Hi @perwah, thank you for your interest in this topic. I agree that Jdbi should bind as much as it can ahead of time, although the usual observation is that in most cases the majority of time is spent in the database anyway. We made some significant performance improvements in the upcoming release. If you do run into any specific performance problems in the Jdbi code, please do report them, but we would like to see an actual use case where it matters. For running performance tests, I strongly recommend a proper benchmarking harness. I will mark this as answered, since it sounds like you don't have a specific concern at this time, but please report back if you do run into bottlenecks.
-
I found myself debugging into JDBI while developing my project that uses JDBI SQL Objects, and I find that all the complex mapping — finding constructors, fields, mappers, argument functions, and so on — is done at runtime, when my application wants to read from or write to the database. That's a lot of code that has to run every time I call my repository methods, which always result in the same execution paths.
Wouldn't it be a nice performance gain to look up and prepare all of this at initialization, producing ready-to-run chains of lambdas for each data class that simply execute when it is time to access the database? This would eliminate the execution of hundreds of lines of code on every invocation.
I spent an afternoon making a proof-of-concept project that introspects domain-object classes and prepares ready-to-run lambdas that create objects when reading and take data from objects when writing. All that has to be done when reading or writing an object is to invoke the appropriate lambda chain, which already knows which getter methods to read or which constructor to call and how to populate its parameters.
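The initialization step could be sketched roughly like this, assuming record-style domain objects. The class and method names below are illustrative, not the POC's actual code: reflection introspects each record component once and turns its accessor into a ready-to-run lambda.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class PreparedGetters {
    record Person(long id, String name) {}

    // At initialization: introspect once, turning each record component's
    // accessor into a ready-to-run lambda. This runs one time per class.
    static List<Function<Object, Object>> prepareGetters(Class<?> type) {
        List<Function<Object, Object>> getters = new ArrayList<>();
        for (var component : type.getRecordComponents()) {
            Method accessor = component.getAccessor();
            getters.add(obj -> {
                try { return accessor.invoke(obj); }
                catch (ReflectiveOperationException e) { throw new RuntimeException(e); }
            });
        }
        return getters;
    }

    public static void main(String[] args) {
        var getters = prepareGetters(Person.class);
        Person p = new Person(42, "Ada");
        // At write time: no introspection left, just invoke the prepared lambdas.
        for (var getter : getters) System.out.println(getter.apply(p));
    }
}
```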
This way, the code that runs when writing an object to the database looks something like this:
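A hedged reconstruction of what that write path might look like follows; the `ValueSetter` interface, the `Person` record, and the call-recording `PreparedStatement` stand-in (used here so no database driver is required) are my assumptions, not the author's actual code.

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;

public class TightWriteLoop {
    record Person(long id, String name) {}

    // A prepared setter: parameter index and getter decided once, at initialization.
    interface ValueSetter { void set(PreparedStatement ps, Person p) throws Exception; }

    static final List<ValueSetter> SQL_VALUE_SETTERS = List.of(
            (ps, p) -> ps.setLong(1, p.id()),
            (ps, p) -> ps.setString(2, p.name()));

    // The per-object work at update time: run each prepared setter, add to batch.
    static List<String> writeBatch(List<Person> people) {
        List<String> calls = new ArrayList<>();
        PreparedStatement ps = recordingStatement(calls);
        try {
            for (Person p : people) {
                for (ValueSetter setter : SQL_VALUE_SETTERS) setter.set(ps, p);
                ps.addBatch();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return calls;
    }

    // Stand-in PreparedStatement that records calls instead of talking to a database.
    static PreparedStatement recordingStatement(List<String> calls) {
        return (PreparedStatement) Proxy.newProxyInstance(
                PreparedStatement.class.getClassLoader(),
                new Class<?>[] {PreparedStatement.class},
                (proxy, method, args) -> {
                    calls.add(method.getName() + (args == null ? "" : java.util.Arrays.toString(args)));
                    return null;
                });
    }

    public static void main(String[] args) {
        System.out.println(writeBatch(List.of(new Person(1, "Ada"), new Person(2, "Alan"))));
    }
}
```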
and that's about all the framework needs to do when performing an SQL update. All the sqlValueSetters are lambdas prepared in advance at initialization: they get values from the pre-looked-up getters of the data object and write them to the predetermined parameters of the statement. No runtime introspection, no searching for mappers, just execute.
This means that, for instance, ConstructorMapper et al. would run lazily at first execution and set everything up then.