The iterator fetches pages whose time ranges overlap by one second, on the assumption that only a couple of transactions will fall within that second, so there will only be a few duplicates to discard.
The problem is that creating a batch payment results in every payment in that batch being created at essentially the same time, with many dozens (at least) created in the same millisecond. This can cause the same page of payments to be fetched over and over, and discarded in its entirety as duplicates each time.
To fix this, fetching the second and subsequent pages must start at exactly the same timestamp as the last record of the previous page, not at a slightly rewound timestamp. If the next page is discarded in its entirety as duplicates, then the timestamp should be wound forward by 1 ms and the fetch tried again. There is a risk that some payments in the previous page may be missed. However, a payments page seems to be quite large - well over 1000 payments - so that risk is small for now.
Ideally the "wind on" step would be 1 µs, but I don't believe the search conditions go below millisecond resolution at this time.
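Something along these lines could work for the iterator loop. This is a minimal Python sketch only, assuming a hypothetical fetch_page(since) helper that wraps the paged payments call and that each record exposes "PaymentID" and an "UpdatedDateUTC" datetime:

```python
from datetime import datetime, timedelta
from typing import Callable, Dict, Iterator, List


def iterate_payments(
    fetch_page: Callable[[datetime], List[Dict]],
    start: datetime,
    wind_on: timedelta = timedelta(milliseconds=1),
) -> Iterator[Dict]:
    """Yield payments without refetching whole pages of duplicates.

    fetch_page(since) is a hypothetical helper returning one page of
    payments updated at or after `since`, oldest first.
    """
    since = start
    seen_ids: set = set()  # IDs from the previous page, used to drop the overlap
    while True:
        page = fetch_page(since)
        if not page:
            break
        fresh = [p for p in page if p["PaymentID"] not in seen_ids]
        if not fresh:
            # The whole page was duplicates (e.g. a batch payment created many
            # records in the same instant): wind the timestamp forward and try
            # again rather than refetching the same page forever.
            since += wind_on
            continue
        yield from fresh
        # Start the next page at exactly the last record's timestamp, not a
        # rewound one; remember this page's IDs so the overlapping records at
        # that timestamp can be discarded on the next pass.
        since = fresh[-1]["UpdatedDateUTC"]
        seen_ids = {p["PaymentID"] for p in page}
```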
The "last updated since" resolution when fetching payments seems to be whole seconds. The Xero API docs say the timestamp is "accurate to a second", so we need to extend the timestamp by a second to get the next page if we get a page consisting entirely of duplicates.
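If the resolution really is whole seconds, the wind-on step in the sketch above would need to be a full second, since a 1 ms step would keep returning the same page. Roughly, reusing the hypothetical names from that sketch:

```python
from datetime import datetime, timedelta, timezone

# "Last updated since" is only accurate to a second, so wind forward by a
# whole second when an entire page comes back as duplicates.
for payment in iterate_payments(
    fetch_page,                                       # helper from the sketch above
    start=datetime(2024, 1, 1, tzinfo=timezone.utc),  # arbitrary example start time
    wind_on=timedelta(seconds=1),
):
    process(payment)  # placeholder for whatever consumes the payments
```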