Zendesk Support Limits Users stream to 100,000 #11895

Closed
danieldiamond opened this issue Apr 11, 2022 · 1 comment · Fixed by #12122
danieldiamond commented Apr 11, 2022

Environment

  • Airbyte version: 0.35.64-alpha
  • OS Version / Instance: AWS EC2
  • Deployment: Docker
  • Source Connector and version: Zendesk Support (0.2.5)
  • Destination Connector and version: Snowflake (0.4.24)
  • Severity: Critical
  • Step where error happened: Sync

The Zendesk Support connector does not pull all users in the Users stream; it stops at 100,000 records. The Zendesk API docs explicitly describe this limit:

Returns an approximate count of users. If the count exceeds 100,000, it is updated every 24 hours.
The response includes a refreshed_at property in a count object that contains a timestamp indicating when the count was last updated.
Note: When the count exceeds 100,000, the refreshed_at property may occasionally be null. This indicates that the count is being updated in the background. The count object's value property is limited to 100,000 until the update is complete.
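The capped-count semantics described above can be checked programmatically. A minimal sketch of parsing the response shape of Zendesk's user-count endpoint (`GET /api/v2/users/count.json`); the `parse_user_count` helper and the sample payload are illustrations, not connector code:

```python
import json

def parse_user_count(payload: str):
    """Extract the (possibly capped) user count and whether it is mid-refresh.

    Per the docs quoted above: when the count exceeds 100,000 and a background
    refresh is in progress, refreshed_at is null and value is capped at 100,000.
    """
    count_obj = json.loads(payload)["count"]
    value = count_obj["value"]
    refreshed_at = count_obj.get("refreshed_at")
    capped = value >= 100_000 and refreshed_at is None
    return value, capped

# Sample payload shaped like the count endpoint's response
sample = '{"count": {"value": 100000, "refreshed_at": null}}'
print(parse_user_count(sample))  # (100000, True)
```

The point is that `count.value` is an unreliable upper bound for a sync: any logic that sizes or stops pagination based on it will silently drop users past 100,000.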

(Three screenshots attached, captured 2022-04-15.)

Expected Behavior

The connector should sync all users, not just the first 100,000.
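One way to get all users regardless of the capped count is to follow pagination links until the API reports no more pages, rather than deriving the number of pages from the count. This is a minimal sketch of that idea, not the actual fix that landed in #12122; the injectable `get_page` callable and the response shape (`users`, `meta.has_more`, `links.next`, modeled on Zendesk's cursor pagination) are assumptions for illustration:

```python
def fetch_all_users(get_page, first_url):
    """Accumulate users by following pagination links until exhausted.

    get_page: hypothetical callable returning the decoded JSON body for a
    URL, injected so the pagination logic is testable without a network.
    """
    users, url = [], first_url
    while url:
        body = get_page(url)
        users.extend(body["users"])
        # Cursor-style pagination: trust has_more/next, never the count.
        url = body["links"]["next"] if body["meta"]["has_more"] else None
    return users

# Fake two-page API standing in for GET /api/v2/users.json?page[size]=...
pages = {
    "p1": {"users": [{"id": 1}, {"id": 2}],
           "meta": {"has_more": True}, "links": {"next": "p2"}},
    "p2": {"users": [{"id": 3}],
           "meta": {"has_more": False}, "links": {"next": None}},
}
print(len(fetch_all_users(pages.get, "p1")))  # 3
```

With this shape the 100,000 ceiling never enters the control flow, so a tenant with more users simply takes more pages.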

Logs


LOG

2022-04-15 03:18:08 INFO i.a.w.w.WorkerRun(call):49 - Executing worker wrapper. Airbyte version: 0.35.64-alpha
2022-04-15 03:18:08 INFO i.a.w.t.TemporalAttemptExecution(get):105 - Docker volume job log path: /tmp/workspace/16336/0/logs.log
2022-04-15 03:18:08 INFO i.a.w.t.TemporalAttemptExecution(get):110 - Executing worker wrapper. Airbyte version: 0.35.64-alpha
2022-04-15 03:18:08 INFO i.a.w.DefaultReplicationWorker(run):104 - start sync worker. job id: 16336 attempt id: 0
2022-04-15 03:18:08 INFO i.a.w.DefaultReplicationWorker(run):116 - configured sync modes: {null.users=full_refresh - overwrite}
2022-04-15 03:18:08 INFO i.a.w.p.a.DefaultAirbyteDestination(start):69 - Running destination...
2022-04-15 03:18:08 INFO i.a.c.i.LineGobbler(voidCall):82 - Checking if airbyte/destination-snowflake:0.4.24 exists...
2022-04-15 03:18:08 INFO i.a.c.i.LineGobbler(voidCall):82 - airbyte/destination-snowflake:0.4.24 was found locally.
2022-04-15 03:18:08 INFO i.a.w.p.DockerProcessFactory(create):106 - Creating docker job ID: 16336
2022-04-15 03:18:08 INFO i.a.w.p.DockerProcessFactory(create):158 - Preparing command: docker run --rm --init -i -w /data/16336/0 --log-driver none --network host -v airbyte_workspace:/data -v /tmp/airbyte_local:/local -e WORKER_CONNECTOR_IMAGE=airbyte/destination-snowflake:0.4.24 -e WORKER_JOB_ATTEMPT=0 -e AIRBYTE_ROLE= -e WORKER_ENVIRONMENT=DOCKER -e AIRBYTE_VERSION=0.35.64-alpha -e WORKER_JOB_ID=16336 airbyte/destination-snowflake:0.4.24 write --config destination_config.json --catalog destination_catalog.json
2022-04-15 03:18:08 INFO i.a.c.i.LineGobbler(voidCall):82 - Checking if airbyte/source-zendesk-support:0.2.5 exists...
2022-04-15 03:18:08 INFO i.a.c.i.LineGobbler(voidCall):82 - airbyte/source-zendesk-support:0.2.5 was found locally.
2022-04-15 03:18:08 INFO i.a.w.p.DockerProcessFactory(create):106 - Creating docker job ID: 16336
2022-04-15 03:18:08 INFO i.a.w.p.DockerProcessFactory(create):158 - Preparing command: docker run --rm --init -i -w /data/16336/0 --log-driver none --network host -v airbyte_workspace:/data -v /tmp/airbyte_local:/local -e WORKER_CONNECTOR_IMAGE=airbyte/source-zendesk-support:0.2.5 -e WORKER_JOB_ATTEMPT=0 -e AIRBYTE_ROLE= -e WORKER_ENVIRONMENT=DOCKER -e AIRBYTE_VERSION=0.35.64-alpha -e WORKER_JOB_ID=16336 airbyte/source-zendesk-support:0.2.5 read --config source_config.json --catalog source_catalog.json
2022-04-15 03:18:08 INFO i.a.w.DefaultReplicationWorker(run):158 - Waiting for source and destination threads to complete.
2022-04-15 03:18:08 INFO i.a.w.DefaultReplicationWorker(lambda$getDestinationOutputRunnable$6):339 - Destination output thread started.
2022-04-15 03:18:08 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):279 - Replication thread started.
2022-04-15 03:18:12 destination > SLF4J: Class path contains multiple SLF4J bindings.
2022-04-15 03:18:12 destination > SLF4J: Found binding in [jar:file:/airbyte/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2022-04-15 03:18:12 destination > SLF4J: Found binding in [jar:file:/airbyte/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2022-04-15 03:18:12 destination > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2022-04-15 03:18:12 destination > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2022-04-15 03:18:13 source > Starting syncing SourceZendeskSupport
2022-04-15 03:18:13 source > Syncing stream: users
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.b.IntegrationCliParser(parseOptions):118 - integration args: {catalog=destination_catalog.json, write=null, config=destination_config.json}
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.b.IntegrationRunner(runInternal):121 - Running integration: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.b.IntegrationRunner(runInternal):122 - Command: WRITE
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.b.IntegrationRunner(runInternal):123 - Integration config: IntegrationConfig{command=WRITE, configPath='destination_config.json', catalogPath='destination_catalog.json', statePath='null'}
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword examples - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword order - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword airbyte_secret - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword multiline - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.d.j.c.SwitchingDestination(getConsumer):65 - Using destination type: COPY_S3
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 WARN i.a.i.d.s.SnowflakeDatabase(createDataSource):96 - Obsolete User/password login mode is used. Please re-create a connection to use the latest connector's version
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.d.s.S3DestinationConfig(createS3Client):169 - Creating S3 client...
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.d.s.StagingConsumerFactory(lambda$toWriteConfig$0):96 - Write config: WriteConfig{streamName=test_users, namespace=zendesk_support, outputSchemaName=zendesk_support, tmpTableName=_airbyte_tmp_bze_test_users, outputTableName=_airbyte_raw_test_users, syncMode=overwrite}
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.d.b.BufferedStreamConsumer(startTracked):116 - class io.airbyte.integrations.destination.buffered_stream_consumer.BufferedStreamConsumer started.
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):114 - Preparing tmp tables in destination started for 1 streams
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):122 - Preparing staging area in destination started for schema zendesk_support stream test_users: tmp table: _airbyte_tmp_bze_test_users, stage: ZENDESK_SUPPORT_TEST_USERS/2022/04/15/03/99D32BF3-4E55-40A6-812B-EF567AC54A35/
2022-04-15 03:18:13 destination > 2022-04-15 03:18:13 INFO c.z.h.HikariDataSource(getConnection):110 - HikariPool-1 - Starting...
2022-04-15 03:18:16 destination > 2022-04-15 03:18:16 INFO c.z.h.p.HikariPool(checkFailFast):565 - HikariPool-1 - Added connection net.snowflake.client.jdbc.SnowflakeConnectionV1@66e21568
2022-04-15 03:18:16 destination > 2022-04-15 03:18:16 INFO c.z.h.HikariDataSource(getConnection):123 - HikariPool-1 - Start completed.
2022-04-15 03:18:17 destination > 2022-04-15 03:18:17 INFO i.a.d.j.DefaultJdbcDatabase(lambda$unsafeQuery$1):106 - closing connection
2022-04-15 03:18:19 destination > 2022-04-15 03:18:19 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):72 - Storage Object my-bucket/ZENDESK_SUPPORT_TEST_USERS does not exist in bucket; creating...
2022-04-15 03:18:19 destination > 2022-04-15 03:18:19 INFO i.a.i.d.s.S3StorageOperations(createBucketObjectIfNotExists):74 - Storage Object my-bucket/ZENDESK_SUPPORT_TEST_USERS has been created in bucket.
2022-04-15 03:18:19 destination > 2022-04-15 03:18:19 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):133 - Preparing staging area in destination completed for schema zendesk_support stream test_users
2022-04-15 03:18:19 destination > 2022-04-15 03:18:19 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onStartFunction$2):136 - Preparing tmp tables in destination completed.
2022-04-15 03:18:19 destination > 2022-04-15 03:18:19 INFO i.a.i.d.r.SerializedBufferingStrategy(lambda$addRecord$0):55 - Starting a new buffer for stream test_users (current state: 0 bytes in 0 buffers)
2022-04-15 03:18:19 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 1000 (1 MB)
2022-04-15 03:18:20 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 2000 (2 MB)
2022-04-15 03:18:20 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 3000 (3 MB)
[... 96 similar "Records read" progress lines omitted; the counter climbs in 1,000-record increments to 99000 ...]
2022-04-15 03:21:12 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 100000 (110 MB)
2022-04-15 03:21:12 source > Read 100000 records from users stream
2022-04-15 03:21:12 source > Finished syncing users
2022-04-15 03:21:12 source > SourceZendeskSupport runtimes:
Syncing stream users 0:02:59.647487
2022-04-15 03:21:12 source > Finished syncing SourceZendeskSupport
2022-04-15 03:21:14 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):305 - Total records read: 100000 (110 MB)
2022-04-15 03:21:14 INFO i.a.w.DefaultReplicationWorker(run):163 - One of source or destination thread complete. Waiting on the other.
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.b.FailureTrackingAirbyteMessageConsumer(close):65 - Airbyte message consumer: succeeded.
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.b.BufferedStreamConsumer(close):170 - executing on success close procedure.
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.r.SerializedBufferingStrategy(flushAll):92 - Flushing all 1 current buffers (8 MB in total)
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.r.SerializedBufferingStrategy(lambda$flushAll$2):95 - Flushing buffer of stream test_users (8 MB)
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.s.StagingConsumerFactory(lambda$flushBufferFunction$3):155 - Flushing buffer for stream test_users (8 MB) to staging
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.r.BaseSerializedBuffer(flush):123 - Finished writing data to 04e4e4f9-82e2-4888-9b17-62f55ba215c418369885553524196817.csv.gz (8 MB)
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.s.u.S3StreamTransferManagerHelper(getDefault):55 - PartSize arg is set to 10 MB
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO a.m.s.StreamTransferManager(getMultiPartOutputStreams):329 - Initiated multipart upload to my-bucket/ZENDESK_SUPPORT_TEST_USERS/2022/04/15/03/99D32BF3-4E55-40A6-812B-EF567AC54A35/04e4e4f9-82e2-4888-9b17-62f55ba215c418369885553524196817.csv.gz with full ID gNT2UiDz7TZIAZvhO.MLPVAv.ANjur3TBsft90._Pty9IqjiWJvTh9vhsBH.bFDsMVlm.8OpCpYO8zSLnyT_sfs84rJ13Y39Hd8uGoIB_LjaWbwvmR3ky66x975Uf9Ak
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO a.m.s.MultiPartOutputStream(close):158 - Called close() on [MultipartOutputStream for parts 1 - 10000]
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO a.m.s.StreamTransferManager(uploadStreamPart):558 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TEST_USERS/2022/04/15/03/99D32BF3-4E55-40A6-812B-EF567AC54A35/04e4e4f9-82e2-4888-9b17-62f55ba215c418369885553524196817.csv.gz with id gNT2UiDz7...x975Uf9Ak]: Finished uploading [Part number 1 containing 8.03 MB]
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO a.m.s.StreamTransferManager(complete):395 - [Manager uploading to my-bucket/ZENDESK_SUPPORT_TEST_USERS/2022/04/15/03/99D32BF3-4E55-40A6-812B-EF567AC54A35/04e4e4f9-82e2-4888-9b17-62f55ba215c418369885553524196817.csv.gz with id gNT2UiDz7...x975Uf9Ak]: Completed
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.r.FileBuffer(deleteFile):78 - Deleting tempFile data 04e4e4f9-82e2-4888-9b17-62f55ba215c418369885553524196817.csv.gz
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.r.SerializedBufferingStrategy(close):119 - Closing buffer for stream test_users
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onCloseFunction$4):182 - Copying into tables in destination started for 1 streams
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onCloseFunction$4):191 - Copying stream test_users of schema zendesk_support into tmp table _airbyte_tmp_bze_test_users to final table _airbyte_raw_test_users from stage path ZENDESK_SUPPORT_TEST_USERS/2022/04/15/03/99D32BF3-4E55-40A6-812B-EF567AC54A35/ with 1 file(s) [04e4e4f9-82e2-4888-9b17-62f55ba215c418369885553524196817.csv.gz]
2022-04-15 03:21:14 destination > 2022-04-15 03:21:14 INFO i.a.i.d.s.SnowflakeS3StagingSqlOperations(copyIntoTmpTableFromStage):88 - Starting copy to tmp table from stage: _airbyte_tmp_bze_test_users in destination from stage: ZENDESK_SUPPORT_TEST_USERS/2022/04/15/03/99D32BF3-4E55-40A6-812B-EF567AC54A35/, schema: zendesk_support, .
2022-04-15 03:21:25 destination > 2022-04-15 03:21:25 INFO i.a.i.d.s.SnowflakeS3StagingSqlOperations(copyIntoTmpTableFromStage):93 - Copy to tmp table zendesk_support._airbyte_tmp_bze_test_users in destination complete.
2022-04-15 03:21:25 destination > 2022-04-15 03:21:25 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onCloseFunction$4):213 - Executing finalization of tables.
2022-04-15 03:21:32 destination > 2022-04-15 03:21:32 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onCloseFunction$4):215 - Finalizing tables in destination completed.
2022-04-15 03:21:32 destination > 2022-04-15 03:21:32 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onCloseFunction$4):217 - Cleaning up destination started for 1 streams
2022-04-15 03:21:32 destination > 2022-04-15 03:21:32 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onCloseFunction$4):221 - Cleaning tmp table in destination started for stream test_users. schema zendesk_support, tmp table name: _airbyte_tmp_bze_test_users
2022-04-15 03:21:33 destination > 2022-04-15 03:21:33 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onCloseFunction$4):226 - Cleaning stage in destination started for stream test_users. schema zendesk_support, stage: ZENDESK_SUPPORT_TEST_USERS
2022-04-15 03:21:33 destination > 2022-04-15 03:21:33 INFO i.a.i.d.s.S3StorageOperations(dropBucketObject):135 - Dropping bucket object ZENDESK_SUPPORT_TEST_USERS...
2022-04-15 03:21:33 destination > 2022-04-15 03:21:33 INFO i.a.i.d.s.S3StorageOperations(dropBucketObject):140 - Bucket object ZENDESK_SUPPORT_TEST_USERS has been deleted...
2022-04-15 03:21:33 destination > 2022-04-15 03:21:33 INFO i.a.i.d.s.StagingConsumerFactory(lambda$onCloseFunction$4):230 - Cleaning up destination completed.
2022-04-15 03:21:33 destination > 2022-04-15 03:21:33 INFO i.a.i.b.IntegrationRunner(runInternal):169 - Completed integration: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2022-04-15 03:21:36 INFO i.a.w.DefaultReplicationWorker(run):165 - Source and destination threads complete.
2022-04-15 03:21:36 INFO i.a.w.DefaultReplicationWorker(run):228 - sync summary: io.airbyte.config.ReplicationAttemptSummary@8770a0c[status=completed,recordsSynced=100000,bytesSynced=115550516,startTime=1649992688581,endTime=1649992896128,totalStats=io.airbyte.config.SyncStats@4827f257[recordsEmitted=100000,bytesEmitted=115550516,stateMessagesEmitted=0,recordsCommitted=100000],streamStats=[io.airbyte.config.StreamSyncStats@2f220b5a[streamName=test_users,stats=io.airbyte.config.SyncStats@f7ec6b3[recordsEmitted=100000,bytesEmitted=115550516,stateMessagesEmitted=<null>,recordsCommitted=100000]]]]
2022-04-15 03:21:36 INFO i.a.w.DefaultReplicationWorker(run):250 - Source did not output any state messages
2022-04-15 03:21:36 WARN i.a.w.DefaultReplicationWorker(run):261 - State capture: No state retained.
2022-04-15 03:21:36 INFO i.a.w.t.TemporalAttemptExecution(get):131 - Stopping cancellation check scheduling...
2022-04-15 03:21:36 �[32mINFO�[m i.a.w.t.s.ReplicationActivityImpl(lambda$replicate$1):147 - sync summary: io.airbyte.config.StandardSyncOutput@598c119e[standardSyncSummary=io.airbyte.config.StandardSyncSummary@89b9618[status=completed,recordsSynced=100000,bytesSynced=115550516,startTime=1649992688581,endTime=1649992896128,totalStats=io.airbyte.config.SyncStats@4827f257[recordsEmitted=100000,bytesEmitted=115550516,stateMessagesEmitted=0,recordsCommitted=100000],streamStats=[io.airbyte.config.StreamSyncStats@2f220b5a[streamName=test_users,stats=io.airbyte.config.SyncStats@f7ec6b3[recordsEmitted=100000,bytesEmitted=115550516,stateMessagesEmitted=<null>,recordsCommitted=100000]]]],state=<null>,outputCatalog=io.airbyte.protocol.models.ConfiguredAirbyteCatalog@2195f165[streams=[io.airbyte.protocol.models.ConfiguredAirbyteStream@5947d709[stream=io.airbyte.protocol.models.AirbyteStream@607e3d5e[name=test_users,jsonSchema={"type":["null","object"],"properties":{"id":{"type":["null","integer"]},"url":{"type":["null","string"]},"name":{"type":["null","string"]},"role":{"type":["null","string"]},"tags":{"type":["null","array"],"items":{"type":["null","string"]}},"alias":{"type":["null","string"]},"email":{"type":["null","string"]},"notes":{"type":["null","string"]},"phone":{"type":["null","string"]},"photo":{"type":["null","object"],"properties":{"id":{"type":["null","integer"]},"url":{"type":["null","string"]},"size":{"type":["null","integer"]},"width":{"type":["null","integer"]},"height":{"type":["null","integer"]},"inline":{"type":["null","boolean"]},"file_name":{"type":["null","string"]},"thumbnails":{"type":["null","array"],"items":{"type":["null","object"],"properties":{"id":{"type":["null","integer"]},"url":{"type":["null","string"]},"size":{"type":["null","integer"]},"width":{"type":["null","integer"]},"height":{"type":["null","integer"]},"inline":{"type":["null","boolean"]},"file_name":{"type":["null","string"]},"content_url":{"type":["null","string"]},"content_type":
{"type":["null","string"]},"mapped_content_url":{"type":["null","string"]}}}},"content_url":{"type":["null","string"]},"content_type":{"type":["null","string"]},"mapped_content_url":{"type":["null","string"]}}},"active":{"type":["null","boolean"]},"locale":{"type":["null","string"]},"shared":{"type":["null","boolean"]},"details":{"type":["null","string"]},"verified":{"type":["null","boolean"]},"chat_only":{"type":["null","boolean"]},"locale_id":{"type":["null","integer"]},"moderator":{"type":["null","boolean"]},"role_type":{"type":["null","integer"]},"signature":{"type":["null","string"]},"suspended":{"type":["null","boolean"]},"time_zone":{"type":["null","string"]},"created_at":{"type":["null","string"],"format":"date-time"},"report_csv":{"type":["null","boolean"]},"updated_at":{"type":["null","string"],"format":"date-time"},"external_id":{"type":["null","string"]},"user_fields":{"type":["null","object"],"additionalProperties":true},"shared_agent":{"type":["null","boolean"]},"last_login_at":{"type":["null","string"],"format":"date-time"},"custom_role_id":{"type":["null","integer"]},"organization_id":{"type":["null","integer"]},"default_group_id":{"type":["null","integer"]},"restricted_agent":{"type":["null","boolean"]},"ticket_restriction":{"type":["null","string"]},"permanently_deleted":{"type":["null","boolean"]},"shared_phone_number":{"type":["null","boolean"]},"only_private_comments":{"type":["null","boolean"]},"two_factor_auth_enabled":{"type":["null","boolean"]}}},supportedSyncModes=[full_refresh, incremental],sourceDefinedCursor=true,defaultCursorField=[updated_at],sourceDefinedPrimaryKey=[[id]],namespace=zendesk_support,additionalProperties={}],syncMode=full_refresh,cursorField=[updated_at],destinationSyncMode=overwrite,primaryKey=[[id]],additionalProperties={}]],additionalProperties={}],failures=[]]
2022-04-15 03:21:36 INFO i.a.w.t.TemporalUtils(withBackgroundHeartbeat):235 - Stopping temporal heartbeating...
2022-04-15 03:21:36 WARN i.a.s.p.JobNotifier(notifyJob):123 - Failed to successfully notify success: io.airbyte.config.Notification@10fab10a[notificationType=slack,sendOnSuccess=false,sendOnFailure=false,slackConfiguration=io.airbyte.config.SlackNotificationConfiguration@5cb3b993[webhook=https://hooks.slack.com/services/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],customerioConfiguration=<null>,additionalProperties={}]
2022-04-15 03:21:36 INFO i.a.v.j.JsonSchemaValidator(test):56 - JSON schema validation failed.
errors: $.access_token: is missing but it is required, $.credentials: does not have a value in the enumeration [oauth2.0], $.credentials: must be a constant value oauth2.0
2022-04-15 03:21:36 INFO i.a.v.j.JsonSchemaValidator(test):56 - JSON schema validation failed.
errors: $.part_size: is not defined in the schema and the schema does not allow additional properties, $.access_key_id: is not defined in the schema and the schema does not allow additional properties, $.s3_bucket_name: is not defined in the schema and the schema does not allow additional properties, $.s3_bucket_region: is not defined in the schema and the schema does not allow additional properties, $.secret_access_key: is not defined in the schema and the schema does not allow additional properties, $.purge_staging_data: is not defined in the schema and the schema does not allow additional properties, $.method: does not have a value in the enumeration [Standard]
2022-04-15 03:21:36 INFO i.a.v.j.JsonSchemaValidator(test):56 - JSON schema validation failed.
errors: $.part_size: is not defined in the schema and the schema does not allow additional properties, $.access_key_id: is not defined in the schema and the schema does not allow additional properties, $.s3_bucket_name: is not defined in the schema and the schema does not allow additional properties, $.s3_bucket_region: is not defined in the schema and the schema does not allow additional properties, $.secret_access_key: is not defined in the schema and the schema does not allow additional properties, $.purge_staging_data: is not defined in the schema and the schema does not allow additional properties, $.method: does not have a value in the enumeration [Internal Staging]


Steps to Reproduce

  1. Configure the connector, including a start date
  2. Run a sync

Are you willing to submit a PR?

No

@danieldiamond (Contributor, Author) commented:
I have reverted to Zendesk Support 0.1.12 and the sync appears to be working properly, albeit much slower, as it doesn't use the newer version's request configuration:

2022-04-12 12:12:33 source > The rate limit of requests is exceeded. Waiting for 27 seconds.
2022-04-12 12:12:33 source > Backing off _send(...) for 0.0s (airbyte_cdk.sources.streams.http.exceptions.UserDefinedBackoffException)
2022-04-12 12:13:03 source > Retrying. Sleeping for 27 seconds
2022-04-12 12:13:03 source > Setting state of users stream to {'updated_at': '2016-01-12T10:26:52Z', '_last_end_time': 1452711416}
2022-04-12 12:13:03 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 551000 (549 MB)
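The waits in the log above come from the connector honoring Zendesk's rate limiting (an HTTP 429 response with a Retry-After header). A minimal sketch of that pattern, assuming a `send()` callable that returns a response object; the names here are illustrative, not the connector's actual code:

```python
import time

def backoff_seconds(retry_after_header, default=60):
    """Parse a Retry-After header value into a non-negative sleep duration."""
    try:
        return max(0, int(retry_after_header))
    except (TypeError, ValueError):
        return default  # header missing or malformed: fall back to a fixed wait

def request_with_backoff(send, max_retries=5):
    """Call send() until it returns a non-429 response or retries run out."""
    for _ in range(max_retries):
        response = send()
        if response.status_code != 429:
            return response
        wait = backoff_seconds(response.headers.get("Retry-After"))
        print(f"The rate limit of requests is exceeded. Waiting for {wait} seconds.")
        time.sleep(wait)
    raise RuntimeError("rate limit retries exhausted")
```

This is roughly what both connector versions do; the 0.2.x regression is in pagination, not in this backoff behavior.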

NOTE: in the latest version, the number of users is capped at 100,000; in the earlier version this does not appear to be the case.
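For context, the 100,000 figure matches the documented behavior of Zendesk's count objects quoted in the issue: values above 100,000 are approximate, refreshed roughly every 24 hours, and clamped to 100,000 while a refresh is in progress (`refreshed_at` is null). A hypothetical helper, not part of the connector, that applies that rule to a count object:

```python
def count_is_reliable(count_obj):
    """Decide whether a Zendesk count object can be taken at face value.

    Per the docs quoted above: counts over 100,000 are approximate and,
    while being refreshed in the background, refreshed_at is null and the
    value is clamped to 100,000.
    """
    value = count_obj.get("value", 0)
    if value < 100_000:
        return True  # below the threshold the reported value is usable as-is
    # At the cap, trust the value only if a background refresh is not in progress.
    return count_obj.get("refreshed_at") is not None
```

For example, a response of `{"count": {"value": 100000, "refreshed_at": null}}` would be flagged as unreliable, which is the situation this issue describes.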
