Fix --chunksize parameter of token janitor #1364

Merged
3 commits merged into master on Jan 8, 2019

fredreichbier (Member) commented Jan 3, 2019

This PR

  • adds a new function get_tokens_paginated_generator that implements the improved pagination approach suggested by @plettich in #1323 (a sketch of the idea follows this list).
  • makes the token janitor use this function to fetch tokens instead of the Pagination-based pagination of get_tokens. This fixes #1322.
  • removes the Pagination-based pagination feature from get_tokens, since nothing except the tests uses it anymore.
  • makes chunked queries optional in the token janitor: queries are no longer chunked by default (see the second sketch below). Users might not want chunked queries, e.g. with --action export, where chunking would produce a different PSKC file and a different encryption key for each chunk.
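
The sketch below illustrates the keyset-pagination idea behind such a generator: instead of fetching page n of a query (page boundaries shift when rows are deleted mid-iteration), remember the highest primary key seen so far and query only for rows above it. This is only an illustration of the approach, not the code merged here; the psize parameter name and the assumption of a Flask-SQLAlchemy Token model with an integer primary key id are illustrative.

```python
from privacyidea.models import Token  # assumed: integer primary key ``Token.id``

def get_tokens_paginated_generator(psize=1000):
    """Yield lists of token rows, at most ``psize`` rows per chunk.

    Keyset pagination: each query asks for rows with an ``id`` greater
    than the last id of the previous chunk.  Unlike page/offset-based
    pagination, deleting rows between chunks (e.g. with --action delete)
    cannot cause later rows to be skipped.
    """
    last_id = 0
    while True:
        chunk = (Token.query
                 .filter(Token.id > last_id)
                 .order_by(Token.id)
                 .limit(psize)
                 .all())
        if not chunk:
            return
        last_id = chunk[-1].id
        yield chunk
```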

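Because chunking is now opt-in, the janitor can branch on whether --chunksize was given. The following is a hypothetical sketch of that branch, not the merged code: iterate_tokens is an illustrative helper, and the psize keyword matches the sketch above rather than a confirmed signature.

```python
from privacyidea.lib.token import get_tokens, get_tokens_paginated_generator

def iterate_tokens(chunksize=None):
    """Yield chunks of tokens; only chunk if a chunk size was given."""
    if chunksize is not None:
        # Opt-in chunking: fetch tokens in chunks of the requested size.
        for chunk in get_tokens_paginated_generator(psize=chunksize):
            yield chunk
    else:
        # Default: a single unchunked query.  With --action export this
        # yields one PSKC file encrypted with one key, instead of one
        # file and one key per chunk.
        yield get_tokens()
```

A janitor action would then loop over iterate_tokens(chunksize), where chunksize is None unless the user explicitly passed --chunksize.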

  

fredreichbier added some commits Jan 3, 2019

Add generator for paginated token queries
Make the token janitor use that new generator.

Closes #1322

fredreichbier requested review from plettich and privacyidea/core on Jan 3, 2019

codecov bot commented Jan 3, 2019

Codecov Report

Merging #1364 into master will decrease coverage by 0.01%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #1364      +/-   ##
==========================================
- Coverage   96.64%   96.63%   -0.02%     
==========================================
  Files         144      144              
  Lines       17291    17293       +2     
==========================================
  Hits        16711    16711              
- Misses        580      582       +2
Impacted Files                   Coverage Δ
privacyidea/lib/token.py         94.93% <100%> (+0.01%) ⬆️
privacyidea/lib/tokens/u2f.py    94.16% <0%> (-1.67%) ⬇️

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b837f39...a3f7d05.

plettich (Contributor) left a comment

looks good.

cornelinux (Member) commented Jan 8, 2019

Looks good to me.
@fredreichbier We should think about cherry-picking this into branch 2.23!

cornelinux merged commit b6a43e7 into master on Jan 8, 2019

5 checks passed

  • ci/circleci: Your tests passed on CircleCI!
  • codecov/patch: 100% of diff hit (target 96.64%)
  • codecov/project: Absolute coverage decreased by -0.01% but relative coverage increased by +3.35% compared to b837f39
  • continuous-integration/travis-ci/pr: The Travis CI build passed
  • continuous-integration/travis-ci/push: The Travis CI build passed