Setup:
- running this branch via `./bin/run.js`, and the old tables via `sf` v2.64.6
- imported ~76K records into the org by running:

```shell
sf data import bulk --file test/test-files/data-project/data/bulkUpsertLarge.csv --sobject account --wait 10
```
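For anyone reproducing this without the test file, a rough sketch of generating a similarly sized Account CSV (the `gen_accounts_csv` helper, the single `NAME` column, and the row naming are illustrative assumptions, not the actual test file's schema):

```shell
# Hypothetical helper: write an N-row Account CSV (header + N data rows).
# The NAME column and SampleAccount naming are assumptions for illustration.
gen_accounts_csv() {
  n=$1
  out=$2
  {
    echo "NAME"
    awk -v n="$n" 'BEGIN { for (i = 1; i <= n; i++) printf "SampleAccount%06d\n", i }'
  } > "$out"
}

gen_accounts_csv 76000 bulkUpsertLarge.generated.csv
```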
❌ `data query -q 'select id,name from account'` freezes / consumes a lot of memory

With the default 50K query limit it stays frozen for a few seconds consuming ~5GB; I had to kill the process.

If I set the limit to 10K records it finishes after a bit, but I still see it consuming ~1GB at the end:

```shell
SF_ORG_MAX_QUERY_LIMIT=10000 ./bin/run.js data query -q 'select id,name from account' > new.txt
wc -l new.txt
# 10005 new.txt
```

With `sf` I'm able to query 50K records and memory usage stayed at ~180MB during the whole execution.
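To make the memory comparisons easier to reproduce, one option is a small polling helper around `ps` (a rough sketch; `peak_rss_kb` is a made-up name, and 0.2s sampling will miss very short spikes):

```shell
# Rough sketch: sample a command's RSS (in KB) while it runs and report the peak.
peak_rss_kb() {
  "$@" &
  pid=$!
  peak=0
  while kill -0 "$pid" 2>/dev/null; do
    rss=$(ps -o rss= -p "$pid" 2>/dev/null | tr -d ' ')
    if [ -n "$rss" ] && [ "$rss" -gt "$peak" ]; then peak=$rss; fi
    sleep 0.2
  done
  wait "$pid" 2>/dev/null
  echo "$peak"
}

# e.g.: peak_rss_kb sh -c './bin/run.js data query -q "select id,name from account" > new.txt'
```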
✅ nested records are properly rendered

```shell
./bin/run.js data query -q "SELECT Id, Name, Phone, Website, NumberOfEmployees, Industry, (SELECT Lastname, Title, Email FROM Contacts) FROM Account WHERE Name LIKE 'SampleAccount%'"
```
❌ bulk upsert `--verbose` table frozen / empty table

I can see failures if the total quantity is small, but with ~76K record failures it hangs like `data query` ⬆️:
```shell
# add a new field to the large CSV so the upsert fails
# NOTE: the following command will also add `new_field` to the header; make sure to remove it manually
awk -F ',' 'BEGIN {OFS = ","} {print $0, "new_field"}' test/test-files/data-project/data/bulkUpsertLarge.csv > badBulkUpsertLarge.csv
```
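If fixing the header manually gets tedious, an awk variant that passes the first line through unchanged should work (the `add_bad_field` wrapper name is made up; only the CSV transform itself is tested here, not the import):

```shell
# Append new_field to every data row, leaving the header (NR == 1) unchanged.
add_bad_field() {
  awk -F ',' 'BEGIN {OFS = ","} NR == 1 {print; next} {print $0, "new_field"}' "$1"
}

# e.g.: add_bad_field test/test-files/data-project/data/bulkUpsertLarge.csv > badBulkUpsertLarge.csv
```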
NOTE: if I redirect the output to a file it finishes successfully, but the table is empty (memory usage was low compared to not redirecting to a file; it seems it skipped data processing?). It might be related to the high number of failures: when redirecting the output with only 10 record failures, I see a valid table in the file.
❌ JSON fields in the table are truncated

The following examples show a small JSON object; for bigger ones try:

```shell
sf data query -q 'SELECT Id, Name, SymbolTable from ApexClass' --use-tooling-api
```
current:

```
➜ plugin-data git:(mdonnalley/new-table) ✗ sf data query -q 'select id, isActive, Metadata from RemoteProxy' --use-tooling-api
 ID                 ISACTIVE METADATA
 ────────────────── ──────── ─────────────────────────────────────
 0rp7i000000VS7HAAW true     {
                               "disableProtocolSecurity": false,
                               "isActive": true,
                               "url": "http://www.apexdevnet.com",
                               "urls": null,
                               "description": null
                             }
Total number of records retrieved: 1.
Querying Data... done
```
new:

```
➜ plugin-data git:(mdonnalley/new-table) ✗ ./bin/run.js data query -q 'select id, isActive, Metadata from RemoteProxy' --use-tooling-api
┌────────────────────┬──────────┬───────────────────────────────────────┐
│ ID                 │ ISACTIVE │ METADATA                              │
├────────────────────┼──────────┼───────────────────────────────────────┤
│ 0rp7i000000VS7HAAW │ true     │ { "disableProtocolSecurity": false…   │
└────────────────────┴──────────┴───────────────────────────────────────┘
Total number of records retrieved: 1.
Querying Data... done
```
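As a workaround for inspecting the full value in the meantime, the `--json` output should still carry the untruncated field. A sketch of pulling it out with a small Python filter (`extract_metadata` is a made-up name, and the `result.records[0].Metadata` path is an assumption about the typical `sf --json` envelope):

```shell
# Pretty-print one record's Metadata field from `--json` output.
# The result/records path is assumed from typical `sf --json` envelopes.
extract_metadata() {
  python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin)["result"]["records"][0]["Metadata"], indent=2))'
}

# e.g.: ./bin/run.js data query -q 'select id, isActive, Metadata from RemoteProxy' --use-tooling-api --json | extract_metadata
```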
@cristiand391 I think oclif/table#32 addresses most of these issues. I've also pushed a commit to wrap the JSON objects in the table instead of truncating them.
The 10K limit for styling works, but I still see high memory usage, even with 5K records, if I query more than 2 fields:
I'm seeing 800MB-1GB at 10K records and ~200MB at 5K. I have an env var for changing the limit at which ink is no longer used (`OCLIF_TABLE_LIMIT`); other than getting rid of ink entirely, there's no other way to decrease the memory usage.
iowillhoit changed the title from "feat: use new table" to "W-16736186 feat: use new table" on Jan 27, 2025
What does this PR do?
Use new table
What issues does this PR fix or reference?
@W-16736186@