fix: wait until all records are emitted#316

Merged
mshanemc merged 11 commits into main from cd/wait-end
Jun 15, 2022

Conversation

@cristiand391 (Member) commented Jun 10, 2022

What does this PR do?

Makes SoqlQuery wait until all records are emitted from jsforce. It also accepts a config aggregator as a parameter (no default; the caller should pass this.configAggregator or create a new instance).

Others:
- added 2 NUTs for maxQueryLimit
- reverted the default limit back to 50000
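
The core of the fix can be sketched as follows. This is an illustrative mock, not the plugin's actual code: jsforce's query stream emits 'record' events followed by 'end', and the bug was resolving before 'end' had fired. collectRecords and the mock emitter below are hypothetical names.

```typescript
import { EventEmitter } from 'node:events';

// Collect every 'record' event and resolve only once 'end' fires,
// so no trailing records are dropped. A plain EventEmitter stands in
// for jsforce's query stream here.
function collectRecords<T>(stream: EventEmitter): Promise<T[]> {
  return new Promise((resolve, reject) => {
    const records: T[] = [];
    stream.on('record', (rec: T) => records.push(rec));
    stream.on('end', () => resolve(records)); // resolve only after ALL records emitted
    stream.on('error', reject);
  });
}

// Usage with a mock stream:
const mock = new EventEmitter();
const done = collectRecords<number>(mock);
[1, 2, 3].forEach((r) => mock.emit('record', r));
mock.emit('end');
done.then((recs) => console.log(recs.length)); // prints 3
```

The promise executor attaches the listeners synchronously, so records emitted immediately after wiring are still captured.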

What issues does this PR fix or reference?

@W-11204187@

forcedotcom/cli#1543

Co-authored-by: Ken Lewis <46458081+klewis-sfdc@users.noreply.github.com>
const queryOpts: Partial<QueryOptions> = {
  autoFetch: true,
-  maxFetch: (config.getInfo('maxQueryLimit').value as number) || 10000,
+  maxFetch: (configAgg.getInfo('maxQueryLimit').value as number) || 50000,
@cristiand391 (Member, Author) commented:

The default limit in sfdx-core was 10000, but plugin-data always set it to 50000; it was changed to 10000 here: https://github.com/salesforcecli/plugin-data/pull/306/files#diff-1a06fd5095e4d783ae9b6e36afb369a23a237384d219007547e157467dd7f4e7L39
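
The fallback semantics being restored can be sketched like this (a minimal illustration; resolveMaxFetch is a hypothetical stand-in for the maxFetch expression shown earlier): a configured maxQueryLimit wins, and only a missing value falls back to the 50000 default.

```typescript
// Hypothetical stand-in for:
//   (configAgg.getInfo('maxQueryLimit').value as number) || 50000
const DEFAULT_MAX_FETCH = 50_000;

function resolveMaxFetch(configured: unknown): number {
  // `||` means undefined, null, and 0 all fall back to the default.
  return (configured as number) || DEFAULT_MAX_FETCH;
}

console.log(resolveMaxFetch(3756));      // prints 3756
console.log(resolveMaxFetch(undefined)); // prints 50000
```

Note the `||` quirk: a user-configured value of 0 would also fall back to 50000.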

shell.exec('sfdx config:set maxQueryLimit=3756 -g', { silent: true });

const soqlQuery = 'SELECT Id FROM ScratchOrgInfo';
const queryCmd = `force:data:soql:query --query "${soqlQuery}" --json --targetusername ${hubOrgUsername}`;
@cristiand391 (Member, Author) commented:

runQuery doesn't allow passing a username; this is the only NUT using the hub, so I just copy-pasted some checks into runQuery instead of refactoring everything.

@mshanemc (Contributor) commented:

QA notes:
./bin/dev force:data:soql:query -u hub -q 'select id from scratchOrgInfo limit 5'
✅ respects my limit in the query

./bin/dev force:data:soql:query -u hub -q 'select id from scratchOrgInfo' > output.txt
✅ got 50k records and a warning that says

Warning: The query result is missing 452928 records due to a 50000 record limit. Increase the number of records returned by setting the config value "maxQueryLimit" or the environment variable "SFDX_MAX_QUERY_LIMIT" to 502928 or greater than 50000.

SFDX_MAX_QUERY_LIMIT=1000000 ./bin/dev force:data:soql:query -u hub -q 'select id from scratchOrgInfo' > output.txt
❌ env has no effect...got 50k records again
but
ORG_MAX_QUERY_LIMIT=1000000 ./bin/dev force:data:soql:query -u hub -q 'select id from scratchOrgInfo' > output.txt
❌ ERROR running force:data:soql:query: Maximum call stack size exceeded (it definitely tried to query that many)

sfdx config:set -g maxQueryLimit=20
./bin/dev force:data:soql:query -u hub -q 'select id from scratchOrgInfo' > output.txt
✅ does 20 records with warning as expected

sfdx config:set -g maxQueryLimit=20000000
./bin/dev force:data:soql:query -u hub -q 'select id from scratchOrgInfo' > output.txt
(wow, this takes a long time)
❌ ERROR running force:data:soql:query: Maximum call stack size exceeded

@mshanemc (Contributor) commented:

QA:
✅ the core change fixes the env var not working
SFDX_MAX_QUERY_LIMIT=60000 ./bin/dev force:data:soql:query -u hub -q 'select id from scratchOrgInfo' > output.txt
Warning: The query result is missing 443295 records due to a 60000 record limit. Increase the number of records returned by setting the config value "maxQueryLimit" or the environment variable "SFDX_MAX_QUERY_LIMIT" to 503295 or greater than 60000.

the call stack issue was opened on oclif here: oclif/core#432
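
For context on that stack overflow: a "Maximum call stack size exceeded" on very large result sets typically points at spreading a huge array into a variadic call, where every element becomes a call argument. Whether that is exactly what happens inside oclif is an assumption (see the linked issue); the sketch below just reproduces the failure mode and a chunked workaround.

```typescript
// Spreading ~1M elements into push() passes each element as a call argument,
// which can overflow the call stack:
//   out.push(...huge); // RangeError: Maximum call stack size exceeded

// Pushing in bounded chunks keeps the argument count safe:
function pushAll<T>(target: T[], source: T[], chunkSize = 10_000): void {
  for (let i = 0; i < source.length; i += chunkSize) {
    target.push(...source.slice(i, i + chunkSize));
  }
}

const huge = new Array<number>(1_000_000).fill(0);
const out: number[] = [];
pushAll(out, huge);
console.log(out.length); // prints 1000000
```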

@mshanemc mshanemc merged commit 4269d08 into main Jun 15, 2022
@mshanemc mshanemc deleted the cd/wait-end branch June 15, 2022 19:52
