
session: improvement refactor the logic of load data batch insert #11132

Merged (2 commits) on Jul 12, 2019

Conversation

cfzjywxk (Contributor) commented Jul 8, 2019

…ce per transaction

What problem does this PR solve?

issue 3092
LOAD DATA runs the batch duplicate-key check on all buffered rows once per network packet instead of once per transaction, which causes too many batch-get requests to the storage node.
We could issue only one batch-get per transaction.

Local rough tests show that doing the batch-get once per transaction is not always better: the cost depends on how efficiently batch-get handles more keys and on the round-trip time of each batch-get call. Generally, slower batch-get calls lead to a higher overall LOAD DATA time.

What is changed and how it works?

Make the batch-get happen once per transaction instead of once per packet (a minimal sketch of the idea follows below).
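
A minimal sketch of the idea, using hypothetical stand-in types and method names rather than TiDB's actual executor code: instead of running the duplicate-key batch check once for every network packet, buffer the decoded rows and run a single check per transaction.

package main

import "fmt"

// loadDataInfo is a hypothetical stand-in for executor.LoadDataInfo: it buffers
// decoded rows, and checkAndInsertOneBatch models CheckAndInsertOneBatch, which
// issues one batch-get against storage to check duplicate keys before inserting.
type loadDataInfo struct {
	rows      []string
	batchGets int
}

// checkAndInsertOneBatch models only the cost: one storage round trip per call,
// regardless of how many rows are buffered.
func (e *loadDataInfo) checkAndInsertOneBatch() {
	e.batchGets++
	fmt.Printf("batch-get #%d covering %d row(s)\n", e.batchGets, len(e.rows))
	e.rows = e.rows[:0]
}

// insertPerPacket models the old behavior: every packet triggers the batch check.
func insertPerPacket(e *loadDataInfo, packets [][]string) {
	for _, pkt := range packets {
		e.rows = append(e.rows, pkt...)
		e.checkAndInsertOneBatch() // one batch-get per packet
	}
}

// insertPerTxn models the behavior after this PR: rows from all packets of the
// transaction are buffered and checked with a single batch-get.
func insertPerTxn(e *loadDataInfo, packets [][]string) {
	for _, pkt := range packets {
		e.rows = append(e.rows, pkt...)
	}
	e.checkAndInsertOneBatch() // one batch-get per transaction
}

func main() {
	packets := [][]string{{"r1", "r2"}, {"r3"}, {"r4", "r5", "r6"}}
	insertPerPacket(&loadDataInfo{}, packets) // 3 batch-gets
	insertPerTxn(&loadDataInfo{}, packets)    // 1 batch-get
}

Whether this wins depends on the trade-off described above: one larger batch-get saves round trips, but a single batch-get over more keys may itself be slower.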

Check List

Tests

  • Unit test
  • Manual test (add detailed scripts or steps below)

Code changes

  • Has exported function/method change
  • Has exported variable/fields change

Side effects

  • Possible performance regression
  • Increased code complexity

Related changes

codecov bot commented Jul 8, 2019

Codecov Report

Merging #11132 into master will increase coverage by 0.0089%.
The diff coverage is 76%.

@@              Coverage Diff               @@
##            master    #11132        +/-   ##
==============================================
+ Coverage   81.095%   81.104%   +0.0089%     
==============================================
  Files          420       420                
  Lines        89622     89638        +16     
==============================================
+ Hits         72679     72700        +21     
+ Misses       11684     11670        -14     
- Partials      5259      5268         +9

codecov bot commented Jul 8, 2019

Codecov Report

Merging #11132 into master will not change coverage.
The diff coverage is n/a.

@@             Coverage Diff             @@
##             master     #11132   +/-   ##
===========================================
  Coverage   81.9464%   81.9464%           
===========================================
  Files           423        423           
  Lines         93239      93239           
===========================================
  Hits          76406      76406           
  Misses        11487      11487           
  Partials       5346       5346

winoros (Member) commented Jul 8, 2019

The title is a little long. Please shorten it.

@@ -213,7 +215,7 @@ func (e *LoadDataInfo) getLine(prevData, curData []byte) ([]byte, []byte, bool)
 // If it has the rest of data isn't completed the processing, then it returns without completed data.
 // If the number of inserted rows reaches the batchRows, then the second return value is true.
 // If prevData isn't nil and curData is nil, there are no other data to deal with and the isEOF is true.
-func (e *LoadDataInfo) InsertData(prevData, curData []byte) ([]byte, bool, error) {
+func (e *LoadDataInfo) InsertData(prevData, curData []byte, txnBatchCheckIns bool) ([]byte, bool, error) {
Contributor:
Maybe it's better to add a comment for the txnBatchCheckIns argument.
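
For instance, something along these lines (a sketch only; the semantics are inferred from the diff excerpt below, not the wording that eventually landed in the PR):

// txnBatchCheckIns, when true, skips the per-packet CheckAndInsertOneBatch call
// inside InsertData, so that the duplicate-key batch check is deferred and done
// once per transaction by the caller; when false, the old per-packet check is kept.
func (e *LoadDataInfo) InsertData(prevData, curData []byte, txnBatchCheckIns bool) ([]byte, bool, error) {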

	if err != nil {
		return nil, reachLimit, err
+	if !txnBatchCheckIns {
+		err := e.CheckAndInsertOneBatch()
Contributor:
After this PR, it seems only the unit-test logic will step into this branch. How about removing this code snippet and the txnBatchCheckIns flag, treating it as always true, and modifying the unit tests to exercise the real check logic?

Contributor Author (cfzjywxk):
@lysu @jackysp
Some local rough tests (see the comments on JIRA issue 3092) show that batch-get per packet may not always be faster. Should we make it configurable whether to batch per transaction or per packet?

cfzjywxk changed the title from "session: improment refactor the logic of load data batch insert, make batchget happen on…" to "session: improment refactor the logic of load data batch insert" on Jul 11, 2019
cfzjywxk changed the title from "session: improment refactor the logic of load data batch insert" to "session: improvment refactor the logic of load data batch insert" on Jul 11, 2019
cfzjywxk (Contributor Author) commented:
/run-all-tests

1 similar comment
cfzjywxk (Contributor Author) commented:
/run-all-tests

cfzjywxk (Contributor Author) commented:
@jackysp @lysu PTAL

lysu (Contributor) left a comment:
LGTM

lysu added labels: component/server, type/enhancement (The issue or PR belongs to an enhancement.), status/all tests passed, status/LGT1 (Indicates that a PR has LGTM 1.) on Jul 12, 2019
jackysp (Member) left a comment:
LGTM

@jackysp jackysp merged commit 8443282 into pingcap:master Jul 12, 2019
Labels: component/server, status/LGT1 (Indicates that a PR has LGTM 1.), type/enhancement (The issue or PR belongs to an enhancement.)
5 participants