Elapsed time for large data transfer #43

Closed
ghost opened this Issue Jun 7, 2015 · 4 comments

ghost commented Jun 7, 2015

I have an example of a large data transfer (about 9,000 rows). These are the classes that represent the data:

public class Province extends RushObject {

    private int oid;
    private String name;
    private String postalCode;

    public Province() {
        super();
    }

    // getters and setters...
}

public class Location extends RushObject {

    private int oid;
    private float latitude;
    private float longitude;
    private String name;
    private Province province;

    public Location() {
        super();
    }

    // getters and setters...
}

And this is the piece of code used when extracting the data:

LogUtils.d( "Retrieving locations from Database" );
init = System.currentTimeMillis();
locations = new RushSearch().find(Location.class);
long end = System.currentTimeMillis();
LogUtils.d( "Query Time: " + (end-init) / 1000 );

Elapsed time on real devices is about 20 seconds.

Do you think there might be a performance problem, or is this a normal amount of time for a query of this size?

Owner

Stuart-campbell commented Jun 7, 2015

I have done some benchmarking against some of the other ORMs: Rush outdoes ActiveAndroid and Sugar, and sits very close behind ORMLite. ORMLite is a far thinner wrapper over the SQL, so I'll live with that.

I have noticed there seems to be a bit of a slowdown when hitting about 10,000 rows: I recorded 4,000 rows in 2.5 seconds, but 10x that took 66 seconds, so there should be something I can do about that.

You can run a RushSearch asynchronously with a callback to help with this, as loading that many rows is never going to be instant (a sketch follows below).

Thanks
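
For reference, a minimal sketch of that async call, assuming a find(Class, RushSearchCallback) overload whose complete() delivers the loaded list; the names are from the RushOrm API as I recall it, so check the version you are on:

// Hedged sketch: run the query off the main thread and receive the
// results in a callback, so the UI is not blocked for ~20 seconds.
// Assumes RushSearch.find(Class, RushSearchCallback) exists in your
// RushOrm version.
new RushSearch().find(Location.class, new RushSearchCallback<Location>() {
    @Override
    public void complete(List<Location> locations) {
        LogUtils.d("Loaded " + locations.size() + " locations");
    }
});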

ghost commented Jun 7, 2015

Okay @Stuart-campbell! Performance is probably limited by what SQLite itself can do; that is why the people at Realm.io decided to build their own storage engine.

Congrats for your nice work!!

Owner

Stuart-campbell commented Jun 19, 2015

Marking this as a bug: given the speed of operations below 4,000 rows, there should not be an exponential increase in the time taken above that number.
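
For anyone who wants to reproduce the measurement, a rough timing sketch reusing the same calls from this thread; seed the table with, say, 4,000 and then 40,000 rows and compare:

// Rough timing sketch: load the whole table and log the elapsed time
// in milliseconds. Time should grow roughly linearly with row count;
// a much steeper jump indicates the bug described above.
long start = System.currentTimeMillis();
List<Location> rows = new RushSearch().find(Location.class);
long elapsed = System.currentTimeMillis() - start;
LogUtils.d("Loaded " + rows.size() + " rows in " + elapsed + " ms");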

Stuart-campbell added the bug label Jun 19, 2015

Owner

Stuart-campbell commented Jun 20, 2015

Improvements were made in v1.1.7, but from doing some reading there is only so much you can do for datasets of 50k+ rows; one workaround is paging, sketched below.

Thanks
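
A sketch of that paging workaround, assuming RushSearch exposes limit() and offset() builder methods; these are assumptions about the API and may differ by RushOrm version:

// Hypothetical paging sketch: load the table in fixed-size chunks
// rather than one 50k-row query. limit()/offset() are assumed builder
// methods; process() is a hypothetical per-chunk handler.
int pageSize = 1000;
int offset = 0;
List<Location> page;
do {
    page = new RushSearch().limit(pageSize).offset(offset).find(Location.class);
    process(page);
    offset += pageSize;
} while (page.size() == pageSize);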
