Override .max_nb_records #22

Merged

merged 2 commits into killbill:backport-0.18.x from Ecsiral:xinli/override_max_nb_records on Dec 7, 2018

@Ecsiral

Ecsiral commented Dec 3, 2018

Improves the pagination performance per killbill/killbill-plugin-framework-ruby#67: return 20001 when there are more than 20k records.

@@ -187,6 +187,11 @@ def self.search_where_clause(t, search_key)
return where_clause
end
# not needed, put a big number beyond the accurate limit (20k) here to make the pagination work
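For context, the addition under review presumably pairs this comment with an override along these lines (a sketch only; the method name comes from the PR title and the constant from the description):

# Sketch: skip the per-request COUNT by reporting a fixed upper bound.
# 20001 signals "more than 20k records", mirroring how Kill Bill core caps
# very large datasets; it is not the exact row count.
def self.max_nb_records
  20001
end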

@daliwei

daliwei Dec 3, 2018

Contributor

Confused by the comment; if it is "not needed", why do we have it :)?

@Ecsiral

Ecsiral Dec 4, 2018

Because ::Killbill::Plugin::ActiveMerchant::ActiveRecord::Response uses the min of max_nb_records and the input limit to decide the page size.

Setting max_nb_records to a large number basically makes it work fine with normal limits.
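For illustration (an assumed example, not code from this PR), the min only kicks in when the requested limit exceeds the override, so a normal page request is unaffected:

max_nb_records = 20001
limit          = 100                          # typical page size requested by the client
actual_limit   = [max_nb_records, limit].min  # => 100; the override never shrinks a normal page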

@pierre

NACK - this basically breaks all deployments of the plugin. I'm not even sure the plugin pagination APIs still work for 20k+ records after this change.

My comment in killbill/killbill-plugin-framework-ruby#67 regarding the 20k number was an example of how Kill Bill core handles very large datasets (e.g. account pagination APIs). My suggestion was to take a similar approach for plugins, but the code needs to be written (it's in a completely different codepath).

@Ecsiral


Ecsiral commented Dec 4, 2018

@pierre I did not see anything broken by doing this.

As I commented above, setting max_nb_records to a large number works fine with normal page-size limits (no more than 20k). I set it to 20001 just because you mentioned that KB core treats 20k+ as 20k.

If we look at the code:

def self.search(search_key, kb_tenant_id, offset = 0, limit = 100)
  pagination                  = ::Killbill::Plugin::Model::Pagination.new
  pagination.current_offset   = offset
  pagination.total_nb_records = self.count_by_sql(self.search_query(search_key, kb_tenant_id))
  pagination.max_nb_records   = self.max_nb_records
  pagination.next_offset      = (!pagination.total_nb_records.nil? && offset + limit >= pagination.total_nb_records) ? nil : offset + limit
  # Reduce the limit if the specified value is larger than the number of records
  actual_limit                = [pagination.max_nb_records, limit].min
  pagination.iterator         = ::Killbill::Plugin::ActiveMerchant::ActiveRecord::StreamyResultSet.new(actual_limit) do |offset, limit|
    self.find_by_sql(self.search_query(search_key, kb_tenant_id, offset, limit)).map { |x| x.to_transaction_info_plugin }
  end
  pagination
end

max_nb_records only matters when the page size the client is asking for is larger than the number of records in the database. So to me, it is more efficient to use a fixed number here than to run a count against the database on every request. I'm OK if you think a smaller number (200?) would be better.

Besides that, I think the pagination would be fine because it actually works off total_nb_records, which is not changed.

Could you please give more details about the pagination failure you mentioned?

@pierre


Member

pierre commented Dec 5, 2018

max_nb_records only matters when the page size the client is asking for is larger than the number of records in the database. So to me, it is more efficient to use a fixed number here than to run a count against the database on every request.

max_nb_records is returned as part of the API here: https://github.com/killbill/killbill-api/blob/3b7a66545e3c319d41b6860132405f2e8193195d/src/main/java/org/killbill/billing/util/entity/Pagination.java#L44-L51

You can follow how it is used in Kill Bill here:

Eventually, the value is used in Kaui for the pagination tables:

If you always say that it's 20k+, the "next" arrows will always be populated, regardless of the exact number. The legend will also probably always be wrong, saying that the dataset has 20k+ records even if there are only a few hundred. Am I missing something?

My suggestion in the mentioned ticket was to do something like this https://github.com/killbill/killbill/blob/c64f1ed838bd2333729ef1787e1560dbbab48d62/util/src/main/java/org/killbill/billing/util/entity/dao/DefaultPaginationSqlDaoHelper.java#L79-L83 instead.
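For reference, that core approach counts rows only up to a threshold and reports threshold + 1 beyond it. Ported to the plugin models it might look roughly like this (a hedged Ruby sketch, not the linked Java code; the helper name, the threshold constant, and the wrapped COUNT query are all assumptions):

SIMPLE_PAGINATION_THRESHOLD = 20_000

def self.estimate_nb_records(search_key, kb_tenant_id)
  # Assume search_query(search_key, kb_tenant_id, offset, limit) returns a plain SELECT,
  # as it does in the iterator above, and count at most THRESHOLD + 1 matching rows
  # so the query stays cheap on large tables.
  limited_sql  = self.search_query(search_key, kb_tenant_id, 0, SIMPLE_PAGINATION_THRESHOLD + 1)
  capped_count = self.count_by_sql("SELECT COUNT(*) FROM (#{limited_sql}) capped")
  capped_count > SIMPLE_PAGINATION_THRESHOLD ? SIMPLE_PAGINATION_THRESHOLD + 1 : capped_count
end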

@Ecsiral


Ecsiral commented Dec 5, 2018

I might be missing something here, but I don't think a fixed max_nb_records would break the "next" arrow. In my opinion, for a given pagination, the "next" arrow should be populated according to total_nb_records and the offset, not max_nb_records.

And for a real production payment system, the 20k threshold would be easy to exceed in a short period. So I still prefer saving 2 (or at least 1) queries on every search request over providing a max record number that is only accurate up to 20k. If the search volume is high, the query time matters; if the search volume is low, the max number doesn't matter.

To finalize the change, I'm fine with following your suggestion if you insist.

@Ecsiral Ecsiral force-pushed the Ecsiral:xinli/override_max_nb_records branch 2 times, most recently from 6b589c4 to 2d1bfd8 Dec 5, 2018

@Ecsiral Ecsiral force-pushed the Ecsiral:xinli/override_max_nb_records branch from 2d1bfd8 to 37ede9c Dec 5, 2018

@pierre


Member

pierre commented Dec 6, 2018

I might be missing something here, but I don't think a fixed max_nb_records would break the "next" arrow. In my opinion, for a given pagination, the "next" arrow should be populated according to total_nb_records and the offset, not max_nb_records.

Maybe, to be tested (all of that logic is delegated to DataTables, and it has been a while since I touched this integration). Regardless, the legend would be wrong/confusing.

And for a real production payment system, the 20k threshold would be easy to exceed in a short period.

Agreed, but we have lots of Kill Bill deployments nowadays at a much smaller scale (and typical QA/test environments don't have that many rows).

So I still prefer saving 2 (or at least 1) queries on every search request over providing a max record number that is only accurate up to 20k. If the search volume is high, the query time matters; if the search volume is low, the max number doesn't matter.

I'm all in favor of optimizing large-scale deployments, but I don't want to compromise on the generic use case (a brand new deployment would be broken out of the box).

My initial suggestion on the other ticket was to do something similar to what Kill Bill core does today, but in plugin-framework-ruby (that way, you get it for free for CyberSource and PayPal). If you prefer fixing it only for Orbital for now, I'm fine with that too.

@pierre

pierre approved these changes Dec 6, 2018

:shipit: Were you able to confirm it solves the performance problem (no extra index required / no need to tweak the generated SQL query)?

Also, before I release, can you verify that the integration tests pass against our sandbox (we don't have them enabled in CI)? Make sure to verify as well that the plugin works fine in your Kill Bill deployment (I'm fuzzy on whether the lambda syntax is already compatible with your version of Kill Bill).

@Ecsiral


Ecsiral commented Dec 6, 2018

For now, the search only applies to Orbital, so I overrode it there to restrict the scope and impact. I'm happy to do the follow-up refactor in the framework repo if we think it's the solution for all plugins, or if they have the same performance issue in search.

@Ecsiral


Ecsiral commented Dec 6, 2018

@pierre Yes, it resolves the performance problem. I built a version locally and deployed it to our staging; the search endpoint works without breaking any smoke tests.

Please help release the change.

Thanks

@pierre pierre merged commit 6b606bc into killbill:backport-0.18.x Dec 7, 2018

@Ecsiral Ecsiral deleted the Ecsiral:xinli/override_max_nb_records branch Dec 7, 2018
