Large extracts (10M to 100M rows) are problematic due to query timeouts. PK Chunking resolves this by instructing the Bulk API to split a query into several parts, each covering at most 250,000 rows, e.g.:
WHERE Id >= '00vD000003Xwm4z' and Id < '00vD000003YaNYE' ...
WHERE Id >= '00vD000003YqVqq' and Id < '00vD000003YwGXr' ...
WHERE Id >= '00vD000003YyaIS' and Id < '00vD000003ZDtyQ' ...
WHERE Id >= '00vD000003bv6oI' and Id < '00vD000003bxls6' ...
The server implements all this splitting automatically when the PK Chunking header is present. But the data loader client needs additional logic to download and recombine the multiple batch results.
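For reference, here is a minimal sketch of the submission side, assuming the WSC Bulk API client (`com.sforce.async.*`) that Data Loader already uses; login and `ConnectorConfig` setup are omitted, and the object name and chunk size are placeholders:

```java
import com.sforce.async.*;

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class PkChunkedQuery {
    public static JobInfo startChunkedQuery(BulkConnection bulk, String soql)
            throws AsyncApiException {
        // Ask the server to split the query into chunks of at most 250,000 rows.
        bulk.addHeader("Sforce-Enable-PKChunking", "chunkSize=250000");

        JobInfo job = new JobInfo();
        job.setObject("Account");               // placeholder: object being queried
        job.setOperation(OperationEnum.query);
        job.setContentType(ContentType.CSV);
        job = bulk.createJob(job);

        // The server splits this single query batch into one batch per chunk.
        bulk.createBatchFromStream(job,
                new ByteArrayInputStream(soql.getBytes(StandardCharsets.UTF_8)));
        return job;
    }
}
```

Note that with PK chunking enabled, the batch you submit ends up in the NOT_PROCESSED state and the server creates one additional batch per chunk, so the client has to collect results from those batches instead.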
What's the easiest way to implement this? Maybe subclassing BulkQueryVisitor?
This is a tough problem: how best to split the data really depends on the actual query results. You could subclass the query visitor with your own chunking strategy, one that suits your specific business query. Then you also need to aggregate the query results.
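A rough sketch of the aggregation side, written free-standing against the WSC client rather than as an actual `BulkQueryVisitor` subclass (whose hooks I haven't checked): wait until every chunk batch completes, skip the original NOT_PROCESSED batch, and concatenate the CSV result sets while keeping the header row only once. A real implementation should use a proper CSV parser, since naive line splitting breaks on quoted embedded newlines:

```java
import com.sforce.async.*;

import java.io.*;
import java.nio.charset.StandardCharsets;

public class ChunkResultAggregator {
    public static void writeCombinedCsv(BulkConnection bulk, JobInfo job, Writer out)
            throws AsyncApiException, IOException, InterruptedException {
        boolean headerWritten = false;
        while (true) {
            BatchInfo[] batches = bulk.getBatchInfoList(job.getId()).getBatchInfo();
            boolean allDone = true;
            for (BatchInfo b : batches) {
                // The original batch ends as NotProcessed under PK chunking; skip it.
                if (b.getState() == BatchStateEnum.NotProcessed) continue;
                if (b.getState() == BatchStateEnum.Failed)
                    throw new IOException("Batch failed: " + b.getStateMessage());
                if (b.getState() != BatchStateEnum.Completed) allDone = false;
            }
            if (allDone) break;
            Thread.sleep(10_000);               // simple fixed poll interval
        }
        for (BatchInfo b : bulk.getBatchInfoList(job.getId()).getBatchInfo()) {
            if (b.getState() != BatchStateEnum.Completed) continue;
            for (String resultId
                    : bulk.getQueryResultList(job.getId(), b.getId()).getResult()) {
                try (BufferedReader r = new BufferedReader(new InputStreamReader(
                        bulk.getQueryResultStream(job.getId(), b.getId(), resultId),
                        StandardCharsets.UTF_8))) {
                    String line = r.readLine();  // CSV header of this result set
                    if (line != null && !headerWritten) {
                        out.write(line);
                        out.write('\n');
                        headerWritten = true;
                    }
                    // Naive row concatenation; see the CSV-parser caveat above.
                    while ((line = r.readLine()) != null) {
                        out.write(line);
                        out.write('\n');
                    }
                }
            }
        }
    }
}
```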