
Performance issue on multiple insert #141

Closed
Serge-SDL opened this issue May 21, 2019 · 6 comments
@Serge-SDL commented May 21, 2019

Hello,

I need to manage big lists of elements (for example, every city in France: around 36K), and when I tried some bulk inserts the performance was poor.
For example, inserting 1000 cities:

  • web -> 147s (the computer is not very fast, but still)
  • mobile via NativeScript + Android -> 12s (better, but still slow).

Here is the code:

        const time = new Date().getTime(); // start timestamp for the timing logs below

        nSQL().createDatabase({
            id: 'maindb',
            mode: 'PERM',
            tables: [
                {
                    name: 'city',
                    model: {
                        'id:uuid': { pk: true },
                        'age:float': { notNull: false },
                        'name:string': { notNull: false },
                        'api_id:int': { notNull: false },
                        'last_update:int': { notNull: false }
                    }
                }
            ]
        }).then(res => {
            console.log('create database', (new Date().getTime() - time) / 1000);
            console.log('res', res);
            nSQL('city')
                .query('upsert', test.documents)
                .exec()
                .then(r => {
                    console.log('create elements', (new Date().getTime() - time) / 1000);
                    console.log('result', r);
                })
                .catch(error => console.log('error', error));
        });

(test.documents is a JSON array of 1000 cities)

Am I doing something wrong, or is this normal?
Thanks!

@ClickSimply (Owner) commented May 26, 2019

Raw import might be more appropriate for what you're needing:
https://nanosql.io/query/import-export.html#raw-import-export

It drops the rows directly into the datastore without cleaning them or checking that they conform to the data model. It's much faster, but you have to make sure your rows are already in the shape they're supposed to be.
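As a rough sketch of that approach (the `rawImport({ table: rows })` call shape is inferred from the import/export docs linked above and should be verified there; the `chunk` helper and `bulkLoad` wrapper are purely illustrative):

```javascript
// Plain helper (not part of nanoSQL): split an array into batches of `size`.
function chunk(rows, size) {
  const batches = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// Illustrative wrapper, assuming the rawImport({ table: rows }) shape
// from the linked docs. Rows must already match the data model,
// since rawImport skips validation and cleanup.
async function bulkLoad(nSQL, rows) {
  for (const batch of chunk(rows, 500)) {
    await nSQL().rawImport({ city: batch });
  }
}
```

Batching keeps any single import call from holding up the event loop for too long; the batch size of 500 is an arbitrary starting point.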

@Serge-SDL (Author) commented May 28, 2019

@ClickSimply
thanks for your answer, but it does not change the performance:
for 1000 imports on web: 147s before -> 158s with raw import :/

@ClickSimply (Owner) commented Jun 10, 2019

I'm working on a transaction API for the next release; it should help a ton with this.

@Serge-SDL (Author) commented Jun 11, 2019

@ClickSimply ok, great, looking forward to testing this :)

@ClickSimply (Owner) commented Jul 23, 2019

2.3.5 will use transactions with rawImport queries. In my testing with IndexedDB, an insert of 5000 rows went from 22 seconds down to 1.5 seconds on the same device.
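For anyone wanting to reproduce numbers like these, a small timing wrapper (plain JavaScript, nothing nanoSQL-specific, along the lines of the `Date()` arithmetic in the original snippet) might look like:

```javascript
// Measure an async operation, log its duration in seconds, and
// pass its result through.
async function timed(label, fn) {
  const start = Date.now();
  const result = await fn();
  console.log(label, (Date.now() - start) / 1000 + 's');
  return result;
}

// Hypothetical usage against a bulk import (the rawImport call shape
// is an assumption taken from the docs, not a verified API):
// await timed('rawImport 5000 rows', () => nSQL().rawImport({ city: rows }));
```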

ClickSimply pushed a commit that referenced this issue Jul 23, 2019
@ClickSimply (Owner) commented Jul 23, 2019

2.3.5 is out and resolves this issue, marking this done.
