2024-07-09 05:00:32 platform > Retry State: RetryManager(completeFailureBackoffPolicy=BackoffPolicy(minInterval=PT10S, maxInterval=PT30M, base=3), partialFailureBackoffPolicy=null, successiveCompleteFailureLimit=5, totalCompleteFailureLimit=10, successivePartialFailureLimit=1000, totalPartialFailureLimit=10, successiveCompleteFailures=4, totalCompleteFailures=4, successivePartialFailures=0, totalPartialFailures=0)
2024-07-09 05:00:32 platform > Backing off for: 4 minutes 30 seconds.
2024-07-09 05:05:07 platform > Cloud storage job log path: /workspace/71651/4/logs.log
2024-07-09 05:05:23 INFO i.m.r.Micronaut(lambda$start$2):98 - Startup completed in 838ms. Server Running: http://orchestrator-repl-job-71651-attempt-4:9000
2024-07-09 05:05:24 replication-orchestrator > Writing async status INITIALIZING for KubePodInfo[namespace=airbyte, name=orchestrator-repl-job-71651-attempt-4, mainContainerInfo=KubeContainerInfo[image=airbyte/container-orchestrator:0.50.38, pullPolicy=IfNotPresent]]...
2024-07-09 05:05:03 platform > Cloud storage job log path: /workspace/71651/4/logs.log
2024-07-09 05:05:03 platform > Executing worker wrapper. Airbyte version: 0.50.38
2024-07-09 05:05:03 platform > Attempt 0 to save workflow id for cancellation
2024-07-09 05:05:03 platform >
2024-07-09 05:05:03 platform > Using default value for environment variable STATE_STORAGE_S3_ACCESS_KEY: ''
2024-07-09 05:05:03 platform > ----- START CHECK -----
2024-07-09 05:05:03 platform > Using default value for environment variable STATE_STORAGE_S3_SECRET_ACCESS_KEY: ''
2024-07-09 05:05:03 platform >
2024-07-09 05:05:03 platform > Using default value for environment variable SIDECAR_KUBE_CPU_LIMIT: '2.0'
2024-07-09 05:05:03 platform > Using default value for environment variable SOCAT_KUBE_CPU_LIMIT: '2.0'
2024-07-09 05:05:03 platform > Using default value for environment variable SIDECAR_KUBE_CPU_REQUEST: '0.1'
2024-07-09 05:05:03 platform > Using default value for environment variable SOCAT_KUBE_CPU_REQUEST: '0.1'
2024-07-09 05:05:03 platform > Using default value for environment variable LAUNCHDARKLY_KEY: ''
2024-07-09 05:05:03 platform > Using default value for environment variable FEATURE_FLAG_CLIENT: ''
2024-07-09 05:05:03 platform > Using default value for environment variable OTEL_COLLECTOR_ENDPOINT: ''
2024-07-09 05:05:03 platform > Attempting to start pod = source-mongodb-v2-check-71651-4-asaky for airbyte/source-mongodb-v2:1.4.0 with resources ConnectorResourceRequirements[main=io.airbyte.config.ResourceRequirements@250ae96c[cpuRequest=2,cpuLimit=,memoryRequest=1Gi,memoryLimit=,additionalProperties={}], heartbeat=io.airbyte.config.ResourceRequirements@3fd0f365[cpuRequest=0.1,cpuLimit=2.0,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdErr=io.airbyte.config.ResourceRequirements@40031ce5[cpuRequest=0.25,cpuLimit=2,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdIn=io.airbyte.config.ResourceRequirements@2d48324d[cpuRequest=0.1,cpuLimit=2.0,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdOut=io.airbyte.config.ResourceRequirements@2d48324d[cpuRequest=0.1,cpuLimit=2.0,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}]] and allowedHosts io.airbyte.config.AllowedHosts@5c3ceabb[hosts=[*.datadoghq.com, *.datadoghq.eu, *.sentry.io],additionalProperties={}]
2024-07-09 05:05:03 platform > source-mongodb-v2-check-71651-4-asaky stdoutLocalPort = 9016
2024-07-09 05:05:03 platform > source-mongodb-v2-check-71651-4-asaky stderrLocalPort = 9017
2024-07-09 05:05:03 platform > Creating stdout socket server...
2024-07-09 05:05:03 platform > Creating pod source-mongodb-v2-check-71651-4-asaky...
2024-07-09 05:05:03 platform > Creating stderr socket server...
2024-07-09 05:05:03 platform > Waiting for init container to be ready before copying files...
2024-07-09 05:05:04 platform > Init container ready..
2024-07-09 05:05:04 platform > Copying files...
2024-07-09 05:05:04 platform > Uploading file: source_config.json
2024-07-09 05:05:04 platform > kubectl cp /tmp/60f33d2b-010c-49e0-bddf-92b5488871d8/source_config.json airbyte/source-mongodb-v2-check-71651-4-asaky:/config/source_config.json -c init
2024-07-09 05:05:04 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:04 platform > kubectl cp complete, closing process
2024-07-09 05:05:04 platform > Uploading file: FINISHED_UPLOADING
2024-07-09 05:05:04 platform > kubectl cp /tmp/2173516d-1fdc-4dc8-b9db-6bbd9fae7193/FINISHED_UPLOADING airbyte/source-mongodb-v2-check-71651-4-asaky:/config/FINISHED_UPLOADING -c init
2024-07-09 05:05:04 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:04 platform > kubectl cp complete, closing process
2024-07-09 05:05:04 platform > Waiting until pod is ready...
2024-07-09 05:05:05 platform > Setting stdout...
2024-07-09 05:05:05 platform > Setting stderr...
2024-07-09 05:05:06 platform > Reading pod IP...
2024-07-09 05:05:06 platform > Pod IP: 192.168.78.193
2024-07-09 05:05:06 platform > Using null stdin output stream...
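The retry state at the top of the log is consistent with a standard capped exponential backoff: with minInterval=PT10S, base=3 and 4 successive complete failures, the reported delay is 10s × 3³ = 270s = 4 minutes 30 seconds. A minimal sketch of that computation (the exact formula Airbyte uses internally is an assumption; only the parameter values come from the log):

```python
from datetime import timedelta

def complete_failure_backoff(min_interval: timedelta,
                             max_interval: timedelta,
                             base: int,
                             successive_failures: int) -> timedelta:
    """Capped exponential backoff: min_interval * base^(n-1), bounded by max_interval."""
    if successive_failures <= 0:
        return timedelta(0)
    backoff = min_interval * base ** (successive_failures - 1)
    return min(backoff, max_interval)

# Values from the RetryManager line: PT10S, PT30M, base=3, successiveCompleteFailures=4
delay = complete_failure_backoff(timedelta(seconds=10), timedelta(minutes=30), 3, 4)
print(delay)  # 270 seconds, matching "Backing off for: 4 minutes 30 seconds."
```

With successiveCompleteFailureLimit=5 and 4 failures already recorded, one more complete failure would exhaust this retry policy.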
2024-07-09 05:05:06 platform > Reading messages from protocol version 0.2.0
2024-07-09 05:05:06 platform > INFO main i.a.i.s.m.MongoDbSource(main):51 starting source: class io.airbyte.integrations.source.mongodb.MongoDbSource
2024-07-09 05:05:06 platform > INFO main i.a.c.i.b.IntegrationCliParser$Companion(parseOptions):144 integration args: {check=null, config=source_config.json}
2024-07-09 05:05:06 platform > INFO main i.a.c.i.b.IntegrationRunner(runInternal):124 Running integration: io.airbyte.integrations.source.mongodb.MongoDbSource
2024-07-09 05:05:06 platform > INFO main i.a.c.i.b.IntegrationRunner(runInternal):125 Command: CHECK
2024-07-09 05:05:06 platform > INFO main i.a.c.i.b.IntegrationRunner(runInternal):126 Integration config: IntegrationConfig{command=CHECK, configPath='source_config.json', catalogPath='null', statePath='null'}
2024-07-09 05:05:06 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword groups - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:06 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword order - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:06 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword group - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:06 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword display_type - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:06 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword airbyte_secret - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:06 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword min - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:06 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword max - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:06 platform > INFO cluster-ClusterId{value='668cc502afbb74269649afea', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1026 to client view of cluster
2024-07-09 05:05:06 platform > INFO main c.m.i.d.l.SLF4JLogger(info):71 MongoClient with metadata {"driver": {"name": "mongo-java-driver|sync|Airbyte", "version": "4.11.0"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.10.219-208.866.amzn2.x86_64"}, "platform": "Java/Amazon.com Inc./21.0.3+9-LTS"} created with settings MongoClientSettings{readPreference=ReadPreference{name=secondaryPreferred, hedgeOptions=null}, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName='AIRBYTE_USER', source='admin', password=, mechanismProperties=}, transportSettings=null, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@56febdc, com.mongodb.Jep395RecordCodecProvider@3b8ee898, com.mongodb.KotlinCodecProvider@7d151a]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[127.0.0.1:27017], srvHost=finaloop-prod-pl-0.xnbzo.mongodb.net, srvServiceName=mongodb, mode=MULTIPLE, requiredClusterType=REPLICA_SET, requiredReplicaSetName='atlas-elry3t-shard-0', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=true, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null}
2024-07-09 05:05:06 platform > INFO cluster-ClusterId{value='668cc502afbb74269649afea', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1025 to client view of cluster
2024-07-09 05:05:06 platform > INFO cluster-ClusterId{value='668cc502afbb74269649afea', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1024 to client view of cluster
2024-07-09 05:05:06 platform > INFO main c.m.i.d.l.SLF4JLogger(info):71 No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2024-07-09 05:05:06 platform > INFO cluster-ClusterId{value='668cc502afbb74269649afea', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1026 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=203917683, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1026, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az5'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668016ba8d3144b6ca265470, counter=3}, lastWriteDate=Tue Jul 09 05:05:05 UTC 2024, lastUpdateTimeNanos=24959710168366}
2024-07-09 05:05:06 platform > INFO cluster-ClusterId{value='668cc502afbb74269649afea', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=203917173, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1025, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az4'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=7fffffff0000000000000070, setVersion=4, topologyVersion=TopologyVersion{processId=66801612bcf2cbedf3003b29, counter=6}, lastWriteDate=Tue Jul 09 05:05:05 UTC 2024, lastUpdateTimeNanos=24959710168976}
2024-07-09 05:05:06 platform > INFO cluster-ClusterId{value='668cc502afbb74269649afea', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1024 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=203915223, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1024, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az6'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668015533bce4fadbd70ae1c, counter=4}, lastWriteDate=Tue Jul 09 05:05:05 UTC 2024, lastUpdateTimeNanos=24959710168466}
2024-07-09 05:05:06 platform > INFO cluster-ClusterId{value='668cc502afbb74269649afea', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Discovered replica set primary pl-0-us-east-1.xnbzo.mongodb.net:1025 with max election id 7fffffff0000000000000070 and max set version 4
2024-07-09 05:05:06 platform > INFO main i.a.i.s.m.MongoDbSource(check):97 The source passed the check operation test!
2024-07-09 05:05:06 platform > INFO main i.a.c.i.b.IntegrationRunner(runInternal):268 Completed integration: io.airbyte.integrations.source.mongodb.MongoDbSource
2024-07-09 05:05:06 platform > INFO main i.a.i.s.m.MongoDbSource(main):53 completed source: class io.airbyte.integrations.source.mongodb.MongoDbSource
2024-07-09 05:05:07 platform > (pod: airbyte / source-mongodb-v2-check-71651-4-asaky) - Closed all resources for pod
2024-07-09 05:05:07 platform > Check connection job received output: io.airbyte.config.StandardCheckConnectionOutput@3d0c2573[status=succeeded,message=,additionalProperties={}]
2024-07-09 05:05:07 platform >
2024-07-09 05:05:07 platform > ----- END CHECK -----
2024-07-09 05:05:07 platform >
2024-07-09 05:05:07 platform > Executing worker wrapper. Airbyte version: 0.50.38
2024-07-09 05:05:07 platform > Attempt 0 to save workflow id for cancellation
2024-07-09 05:05:07 platform > Using default value for environment variable STATE_STORAGE_S3_ACCESS_KEY: ''
2024-07-09 05:05:07 platform > Using default value for environment variable STATE_STORAGE_S3_SECRET_ACCESS_KEY: ''
2024-07-09 05:05:07 platform > Using default value for environment variable SIDECAR_KUBE_CPU_LIMIT: '2.0'
2024-07-09 05:05:07 platform > Using default value for environment variable SOCAT_KUBE_CPU_LIMIT: '2.0'
2024-07-09 05:05:07 platform >
2024-07-09 05:05:07 platform > ----- START CHECK -----
2024-07-09 05:05:07 platform >
2024-07-09 05:05:07 platform > Using default value for environment variable SIDECAR_KUBE_CPU_REQUEST: '0.1'
2024-07-09 05:05:07 platform > Using default value for environment variable SOCAT_KUBE_CPU_REQUEST: '0.1'
2024-07-09 05:05:07 platform > Using default value for environment variable LAUNCHDARKLY_KEY: ''
2024-07-09 05:05:07 platform > Using default value for environment variable FEATURE_FLAG_CLIENT: ''
2024-07-09 05:05:07 platform > Using default value for environment variable OTEL_COLLECTOR_ENDPOINT: ''
2024-07-09 05:05:07 platform > Attempting to start pod = destination-snowflake-check-71651-4-bfkxo for airbyte/destination-snowflake:3.10.1 with resources ConnectorResourceRequirements[main=io.airbyte.config.ResourceRequirements@7cd28811[cpuRequest=2,cpuLimit=,memoryRequest=3Gi,memoryLimit=,additionalProperties={}], heartbeat=io.airbyte.config.ResourceRequirements@4f92943d[cpuRequest=0.1,cpuLimit=2.0,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdErr=io.airbyte.config.ResourceRequirements@aba7a31[cpuRequest=0.25,cpuLimit=2,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdIn=io.airbyte.config.ResourceRequirements@6fed5830[cpuRequest=0.1,cpuLimit=2.0,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdOut=io.airbyte.config.ResourceRequirements@6fed5830[cpuRequest=0.1,cpuLimit=2.0,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}]] and allowedHosts null
2024-07-09 05:05:07 platform > destination-snowflake-check-71651-4-bfkxo stdoutLocalPort = 9032
2024-07-09 05:05:07 platform > destination-snowflake-check-71651-4-bfkxo stderrLocalPort = 9033
2024-07-09 05:05:07 platform > Creating stdout socket server...
2024-07-09 05:05:07 platform > Creating pod destination-snowflake-check-71651-4-bfkxo...
2024-07-09 05:05:07 platform > Creating stderr socket server...
2024-07-09 05:05:07 platform > Waiting for init container to be ready before copying files...
2024-07-09 05:05:09 platform > Init container ready..
2024-07-09 05:05:09 platform > Copying files...
2024-07-09 05:05:09 platform > Uploading file: source_config.json
2024-07-09 05:05:09 platform > kubectl cp /tmp/a7e6894e-03ae-4853-93ed-807e697362bf/source_config.json airbyte/destination-snowflake-check-71651-4-bfkxo:/config/source_config.json -c init
2024-07-09 05:05:09 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:09 platform > kubectl cp complete, closing process
2024-07-09 05:05:09 platform > Uploading file: FINISHED_UPLOADING
2024-07-09 05:05:09 platform > kubectl cp /tmp/4a7537ba-5b14-4578-be45-74e56d3d1a63/FINISHED_UPLOADING airbyte/destination-snowflake-check-71651-4-bfkxo:/config/FINISHED_UPLOADING -c init
2024-07-09 05:05:09 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:09 platform > kubectl cp complete, closing process
2024-07-09 05:05:09 platform > Waiting until pod is ready...
2024-07-09 05:05:10 platform > Setting stdout...
2024-07-09 05:05:10 platform > Setting stderr...
2024-07-09 05:05:11 platform > Reading pod IP...
2024-07-09 05:05:11 platform > Pod IP: 192.168.95.9
2024-07-09 05:05:11 platform > Using null stdin output stream...
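The file-upload pattern repeated in the log is: `kubectl cp` each config file into the pod's init container, then `kubectl cp` an empty `FINISHED_UPLOADING` sentinel so the init container knows the transfer is complete. A minimal sketch of how those command lines are assembled (function names and the local sentinel path are hypothetical; the argument shape matches the `kubectl cp ... -c init` lines above):

```python
def kubectl_cp_args(local_path: str, namespace: str, pod: str,
                    remote_path: str, container: str) -> list[str]:
    # Mirrors: kubectl cp <local> <namespace>/<pod>:<remote> -c <container>
    return ["kubectl", "cp", local_path,
            f"{namespace}/{pod}:{remote_path}", "-c", container]

def upload_with_sentinel(files, namespace: str, pod: str):
    """Yield one kubectl cp command per config file, then the sentinel last."""
    for local, remote in files:
        yield kubectl_cp_args(local, namespace, pod, remote, "init")
    # The sentinel must come last: its presence signals "all files uploaded".
    yield kubectl_cp_args("/tmp/FINISHED_UPLOADING", namespace, pod,
                          "/config/FINISHED_UPLOADING", "init")
```

Each command would then be executed with something like `subprocess.run(cmd, check=True)`; the log's "Waiting for kubectl cp to complete" lines correspond to that blocking wait.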
2024-07-09 05:05:11 platform > Reading messages from protocol version 0.2.0
2024-07-09 05:05:11 platform > INFO main i.a.c.i.b.a.AdaptiveDestinationRunner$Runner(getDestination):54 Running destination under deployment mode: OSS
2024-07-09 05:05:11 platform > INFO main i.a.c.i.b.a.AdaptiveDestinationRunner$Runner(run):67 Starting destination: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2024-07-09 05:05:11 platform > INFO main i.a.c.i.b.IntegrationCliParser$Companion(parseOptions):144 integration args: {check=null, config=source_config.json}
2024-07-09 05:05:11 platform > INFO main i.a.c.i.b.IntegrationRunner(runInternal):124 Running integration: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2024-07-09 05:05:11 platform > INFO main i.a.c.i.b.IntegrationRunner(runInternal):125 Command: CHECK
2024-07-09 05:05:11 platform > INFO main i.a.c.i.b.IntegrationRunner(runInternal):126 Integration config: IntegrationConfig{command=CHECK, configPath='source_config.json', catalogPath='null', statePath='null'}
2024-07-09 05:05:11 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword pattern_descriptor - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:11 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword order - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:11 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword airbyte_secret - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:11 platform > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword airbyte_hidden - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2024-07-09 05:05:11 platform > INFO main c.z.h.HikariDataSource(getConnection):109 HikariPool-1 - Starting...
2024-07-09 05:05:12 platform > INFO main c.z.h.p.HikariPool(checkFailFast):554 HikariPool-1 - Added connection net.snowflake.client.jdbc.SnowflakeConnectionV1@78ec89a6
2024-07-09 05:05:12 platform > INFO main c.z.h.HikariDataSource(getConnection):122 HikariPool-1 - Start completed.
2024-07-09 05:05:12 platform > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection
2024-07-09 05:05:12 platform > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection
2024-07-09 05:05:12 platform > INFO main i.a.i.d.s.t.SnowflakeDestinationHandler(execute):230 Executing sql 03230356-18ae-46c3-8673-da054aa39ffe-351de95d-b58b-4d6d-92cf-5184782902e4: CREATE TABLE IF NOT EXISTS "airbyte_internal"."STAGING_raw__stream__airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df"( "_airbyte_raw_id" VARCHAR PRIMARY KEY, "_airbyte_extracted_at" TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp(), "_airbyte_loaded_at" TIMESTAMP WITH TIME ZONE DEFAULT NULL, "_airbyte_data" VARIANT, "_airbyte_meta" VARIANT DEFAULT NULL, "_airbyte_generation_id" INTEGER DEFAULT NULL ) data_retention_time_in_days = 1;
2024-07-09 05:05:13 platform > INFO main i.a.i.d.s.t.SnowflakeDestinationHandler(execute):252 Sql 03230356-18ae-46c3-8673-da054aa39ffe-351de95d-b58b-4d6d-92cf-5184782902e4 completed in 268 ms
2024-07-09 05:05:13 platform > INFO main i.a.i.d.s.t.SnowflakeDestinationHandler(execute):230 Executing sql 5b6cc679-850f-4f07-b71f-2a152a75f0e5-36a39b3b-96a9-447b-80b8-4082f9a96f4b: TRUNCATE TABLE "airbyte_internal"."STAGING_raw__stream__airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df";
2024-07-09 05:05:13 platform > INFO main i.a.i.d.s.t.SnowflakeDestinationHandler(execute):252 Sql 5b6cc679-850f-4f07-b71f-2a152a75f0e5-36a39b3b-96a9-447b-80b8-4082f9a96f4b completed in 183 ms
2024-07-09 05:05:13 platform > INFO main i.a.i.b.d.o.AbstractStreamOperation():44 Typing and deduping disabled, skipping final table initialization
2024-07-09 05:05:13 platform > INFO main i.a.c.i.d.r.BaseSerializedBuffer(flush):162 Finished writing data to 1b315bc6-3de1-4f2d-990f-6771923ea58215351934439573004504.csv.gz (123 bytes)
2024-07-09 05:05:13 platform > INFO main i.a.c.i.d.s.o.StagingStreamOperations(writeRecords):50 Buffer flush complete for stream _airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df (123 bytes) to staging
2024-07-09 05:05:13 platform > INFO main i.a.i.d.s.o.SnowflakeStagingClient(uploadRecordsToBucket):84 executing query 33551668-d548-449e-b187-611ff79cd952, PUT file:///tmp/1b315bc6-3de1-4f2d-990f-6771923ea58215351934439573004504.csv.gz @"airbyte_internal"."STAGING_raw__stream__airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df"/2024/07/09/05/70E3BEEB-EF8D-4FDA-BD05-7EDEDEC948FE/ PARALLEL = 4;
2024-07-09 05:05:14 platform > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection
2024-07-09 05:05:14 platform > INFO main i.a.i.d.s.o.SnowflakeStagingClient(uploadRecordsToBucket):94 query 33551668-d548-449e-b187-611ff79cd952, completed with [{"source":"1b315bc6-3de1-4f2d-990f-6771923ea58215351934439573004504.csv.gz","target":"1b315bc6-3de1-4f2d-990f-6771923ea58215351934439573004504.csv.gz","source_size":123,"target_size":123,"source_compression":"GZIP","target_compression":"GZIP","status":"UPLOADED","encryption":"","message":""}]
2024-07-09 05:05:14 platform > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection
2024-07-09 05:05:14 platform > INFO main i.a.i.d.s.o.SnowflakeStagingClient(uploadRecordsToStage):70 Successfully loaded records to stage 2024/07/09/05/70E3BEEB-EF8D-4FDA-BD05-7EDEDEC948FE/ with 0 re-attempt(s)
2024-07-09 05:05:14 platform > INFO main i.a.i.d.s.o.SnowflakeStagingClient(copyIntoTableFromStage):177 query 9f3d0a77-5002-478d-b847-bac56be9eee8, COPY INTO "airbyte_internal"."STAGING_raw__stream__airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df" FROM '@"airbyte_internal"."STAGING_raw__stream__airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df"/2024/07/09/05/70E3BEEB-EF8D-4FDA-BD05-7EDEDEC948FE/' file_format = ( type = csv compression = auto field_delimiter = ',' skip_header = 0 FIELD_OPTIONALLY_ENCLOSED_BY = '"' NULL_IF=('') ) files = ('1b315bc6-3de1-4f2d-990f-6771923ea58215351934439573004504.csv.gz');
2024-07-09 05:05:15 platform > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection
2024-07-09 05:05:15 platform > INFO main i.a.i.d.s.o.SnowflakeStagingClient(copyIntoTableFromStage):186 query 9f3d0a77-5002-478d-b847-bac56be9eee8, successfully loaded 1 rows of data into table
2024-07-09 05:05:15 platform > INFO main i.a.c.i.d.r.FileBuffer(deleteFile):75 Deleting tempFile data 1b315bc6-3de1-4f2d-990f-6771923ea58215351934439573004504.csv.gz
2024-07-09 05:05:15 platform > INFO main i.a.i.d.s.o.SnowflakeStorageOperation(cleanupStage):78 Cleaning up stage "airbyte_internal"."STAGING_raw__stream__airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df"
2024-07-09 05:05:15 platform > INFO main i.a.i.b.d.o.AbstractStreamOperation(finalizeTable):127 Typing and deduping disabled, skipping final table finalization. Raw records can be found at airbyte_internal.STAGING_raw__stream__airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df
2024-07-09 05:05:15 platform > INFO main i.a.i.d.s.t.SnowflakeDestinationHandler(execute):230 Executing sql 92f63659-548e-431e-a114-051527586bec-2628be63-e17f-44ad-b13b-688092dc7343: DROP TABLE IF EXISTS "airbyte_internal"."STAGING_raw__stream__airbyte_connection_test_1b043ad05edf4411bea7f40d5faa52df";
2024-07-09 05:05:15 platform > INFO main i.a.i.d.s.t.SnowflakeDestinationHandler(execute):252 Sql 92f63659-548e-431e-a114-051527586bec-2628be63-e17f-44ad-b13b-688092dc7343 completed in 116 ms
2024-07-09 05:05:15 platform > INFO main c.z.h.HikariDataSource(close):349 HikariPool-1 - Shutdown initiated...
2024-07-09 05:05:16 platform > INFO main c.z.h.HikariDataSource(close):351 HikariPool-1 - Shutdown completed.
2024-07-09 05:05:16 platform > INFO main i.a.c.i.b.IntegrationRunner(runInternal):268 Completed integration: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2024-07-09 05:05:16 platform > INFO main i.a.c.i.b.a.AdaptiveDestinationRunner$Runner(run):69 Completed destination: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2024-07-09 05:05:17 platform > (pod: airbyte / destination-snowflake-check-71651-4-bfkxo) - Closed all resources for pod
2024-07-09 05:05:17 platform > Check connection job received output: io.airbyte.config.StandardCheckConnectionOutput@10bc2baa[status=succeeded,message=,additionalProperties={}]
2024-07-09 05:05:17 platform >
2024-07-09 05:05:17 platform > ----- END CHECK -----
2024-07-09 05:05:17 platform >
2024-07-09 05:05:18 platform > Cloud storage job log path: /workspace/71651/4/logs.log
2024-07-09 05:05:18 platform > Executing worker wrapper. Airbyte version: 0.50.38
2024-07-09 05:05:18 platform > Attempt 0 to save workflow id for cancellation
2024-07-09 05:05:18 platform > Creating orchestrator-repl-job-71651-attempt-4 for attempt number: 4
2024-07-09 05:05:18 platform > Successfully deleted all running pods for the connection!
2024-07-09 05:05:18 platform > Waiting for pod to be running...
2024-07-09 05:05:18 platform > Pod airbyte/orchestrator-repl-job-71651-attempt-4 is running on 192.168.79.128
2024-07-09 05:05:18 platform > Uploading file: envMap.json
2024-07-09 05:05:18 platform > kubectl cp /tmp/5494a2a9-79e1-4310-acdf-86ae0d1405fc/envMap.json airbyte/orchestrator-repl-job-71651-attempt-4:/config/envMap.json -c init
2024-07-09 05:05:18 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:18 platform > kubectl cp complete, closing process
2024-07-09 05:05:18 platform > Uploading file: application.txt
2024-07-09 05:05:18 platform > kubectl cp /tmp/b43bd1d8-82f7-4460-8da3-d2e22ce46901/application.txt airbyte/orchestrator-repl-job-71651-attempt-4:/config/application.txt -c init
2024-07-09 05:05:18 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:19 platform > kubectl cp complete, closing process
2024-07-09 05:05:19 platform > Uploading file: jobRunConfig.json
2024-07-09 05:05:19 platform > kubectl cp /tmp/b5a2b273-2be0-4275-8013-e3a66eb14ffe/jobRunConfig.json airbyte/orchestrator-repl-job-71651-attempt-4:/config/jobRunConfig.json -c init
2024-07-09 05:05:19 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:19 platform > kubectl cp complete, closing process
2024-07-09 05:05:19 platform > Uploading file: destinationLauncherConfig.json
2024-07-09 05:05:19 platform > kubectl cp /tmp/8bc1bbe5-1c7c-444e-942f-8c581c76095f/destinationLauncherConfig.json airbyte/orchestrator-repl-job-71651-attempt-4:/config/destinationLauncherConfig.json -c init
2024-07-09 05:05:19 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:19 platform > kubectl cp complete, closing process
2024-07-09 05:05:19 platform > Uploading file: sourceLauncherConfig.json
2024-07-09 05:05:19 platform > kubectl cp /tmp/3c5a3592-c894-4241-9ed9-2fb2895d14e8/sourceLauncherConfig.json airbyte/orchestrator-repl-job-71651-attempt-4:/config/sourceLauncherConfig.json -c init
2024-07-09 05:05:19 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:19 platform > kubectl cp complete, closing process
2024-07-09 05:05:19 platform > Uploading file: input.json
2024-07-09 05:05:19 platform > kubectl cp /tmp/bdad61f4-6f81-47fa-90ee-edac0d18c4b3/input.json airbyte/orchestrator-repl-job-71651-attempt-4:/config/input.json -c init
2024-07-09 05:05:19 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:19 platform > kubectl cp complete, closing process
2024-07-09 05:05:19 platform > Uploading file: KUBE_POD_INFO
2024-07-09 05:05:19 platform > kubectl cp /tmp/d37324d2-ed60-44da-9204-1827fa506564/KUBE_POD_INFO airbyte/orchestrator-repl-job-71651-attempt-4:/config/KUBE_POD_INFO -c init
2024-07-09 05:05:19 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:19 platform > kubectl cp complete, closing process
2024-07-09 05:05:19 platform > Uploading file: FINISHED_UPLOADING
2024-07-09 05:05:19 platform > kubectl cp /tmp/9cd285b7-4797-4a00-8889-bdcd61b466aa/FINISHED_UPLOADING airbyte/orchestrator-repl-job-71651-attempt-4:/config/FINISHED_UPLOADING -c init
2024-07-09 05:05:19 platform > Waiting for kubectl cp to complete
2024-07-09 05:05:19 platform > kubectl cp complete, closing process
2024-07-09 05:05:23 INFO i.a.f.ConfigFileClient():105 - path /flags does not exist, will return default flag values
2024-07-09 05:05:24 INFO i.a.c.EnvConfigs(getEnvOrDefault):1145 - Using default value for environment variable STATE_STORAGE_S3_ACCESS_KEY: ''
2024-07-09 05:05:24 INFO i.a.c.EnvConfigs(getEnvOrDefault):1145 - Using default value for environment variable STATE_STORAGE_S3_SECRET_ACCESS_KEY: ''
2024-07-09 05:05:24 INFO i.a.c.EnvConfigs(getEnvOrDefault):1145 - Using default value for environment variable METRIC_CLIENT: ''
2024-07-09 05:05:24 INFO i.a.c.EnvConfigs(getEnvOrDefault):1145 - Using default value for environment variable METRIC_CLIENT: ''
2024-07-09 05:05:24 WARN i.a.m.l.MetricClientFactory(initialize):74 - MetricClient was not recognized or not provided. Accepted values are `datadog` or `otel`.
2024-07-09 05:05:24 replication-orchestrator > sourceLauncherConfig is: io.airbyte.persistence.job.models.IntegrationLauncherConfig@cf08c97[jobId=71651,attemptId=4,connectionId=9017f30a-1d76-4594-8ba3-a8d9c66b3051,workspaceId=68c1d0e2-4cc2-43bc-b5db-99b9f5cea75d,dockerImage=airbyte/source-mongodb-v2:1.4.0,normalizationDockerImage=,supportsDbt=false,normalizationIntegrationType=,protocolVersion=Version{version='0.2.0', major='0', minor='2', patch='0'},isCustomConnector=false,allowedHosts=io.airbyte.config.AllowedHosts@37c87fcc[hosts=[*.datadoghq.com, *.datadoghq.eu, *.sentry.io],additionalProperties={}],additionalEnvironmentVariables=,additionalLabels=,additionalProperties={}]
2024-07-09 05:05:24 replication-orchestrator > Attempt 0 to get the source definition for feature flag checks
2024-07-09 05:05:24 replication-orchestrator > Attempt 0 to get the source definition
2024-07-09 05:05:24 replication-orchestrator > Concurrent stream read enabled? false
2024-07-09 05:05:24 replication-orchestrator > Setting up source...
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable STATE_STORAGE_S3_ACCESS_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable STATE_STORAGE_S3_SECRET_ACCESS_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_MEMORY_LIMIT: '50Mi'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_MEMORY_REQUEST: '25Mi'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_CPU_LIMIT: '2.0'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_CPU_REQUEST: '0.1'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_MEMORY_LIMIT: '50Mi'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_MEMORY_REQUEST: '25Mi'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_CPU_LIMIT: '2.0'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_CPU_REQUEST: '0.1'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_MEMORY_LIMIT: '50Mi'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_MEMORY_REQUEST: '25Mi'
2024-07-09 05:05:24 replication-orchestrator > Setting up destination...
2024-07-09 05:05:24 replication-orchestrator > Setting up replication worker...
2024-07-09 05:05:24 replication-orchestrator > Running replication worker...
2024-07-09 05:05:24 replication-orchestrator > start sync worker. job id: 71651 attempt id: 4
2024-07-09 05:05:24 replication-orchestrator >
2024-07-09 05:05:24 replication-orchestrator > ----- START REPLICATION -----
2024-07-09 05:05:24 replication-orchestrator >
2024-07-09 05:05:24 replication-orchestrator > Running destination...
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable STATE_STORAGE_S3_ACCESS_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable STATE_STORAGE_S3_SECRET_ACCESS_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_CPU_LIMIT: '2.0'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_CPU_REQUEST: '0.1'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable STATE_STORAGE_S3_ACCESS_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable LAUNCHDARKLY_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable STATE_STORAGE_S3_SECRET_ACCESS_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable FEATURE_FLAG_CLIENT: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_CPU_LIMIT: '2.0'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable OTEL_COLLECTOR_ENDPOINT: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SIDECAR_KUBE_CPU_REQUEST: '0.1'
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable LAUNCHDARKLY_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable FEATURE_FLAG_CLIENT: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable OTEL_COLLECTOR_ENDPOINT: ''
2024-07-09 05:05:24 replication-orchestrator > Attempting to start pod = destination-snowflake-write-71651-4-jzrha for airbyte/destination-snowflake:3.10.1 with resources ConnectorResourceRequirements[main=io.airbyte.config.ResourceRequirements@797c2480[cpuRequest=2,cpuLimit=,memoryRequest=3Gi,memoryLimit=,additionalProperties={}], heartbeat=io.airbyte.config.ResourceRequirements@14bb614d[cpuRequest=0.05,cpuLimit=0.2,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdErr=io.airbyte.config.ResourceRequirements@f9111e5[cpuRequest=0.01,cpuLimit=0.5,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdIn=io.airbyte.config.ResourceRequirements@5874ed41[cpuRequest=0.5,cpuLimit=1,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdOut=io.airbyte.config.ResourceRequirements@40b48aeb[cpuRequest=0.01,cpuLimit=0.5,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}]] and allowedHosts null
2024-07-09 05:05:24 replication-orchestrator > Attempting to start pod = source-mongodb-v2-read-71651-4-afsnd for airbyte/source-mongodb-v2:1.4.0 with resources ConnectorResourceRequirements[main=io.airbyte.config.ResourceRequirements@15e8be90[cpuRequest=2,cpuLimit=,memoryRequest=3Gi,memoryLimit=,additionalProperties={}], heartbeat=io.airbyte.config.ResourceRequirements@14bb614d[cpuRequest=0.05,cpuLimit=0.2,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdErr=io.airbyte.config.ResourceRequirements@1c9e7908[cpuRequest=0.01,cpuLimit=0.5,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}], stdIn=null, stdOut=io.airbyte.config.ResourceRequirements@1dd05577[cpuRequest=0.5,cpuLimit=1,memoryRequest=25Mi,memoryLimit=50Mi,additionalProperties={}]] and allowedHosts io.airbyte.config.AllowedHosts@37c87fcc[hosts=[*.datadoghq.com, *.datadoghq.eu, *.sentry.io],additionalProperties={}]
2024-07-09 05:05:24 replication-orchestrator > source-mongodb-v2-read-71651-4-afsnd stdoutLocalPort = 9880
2024-07-09 05:05:24 replication-orchestrator > destination-snowflake-write-71651-4-jzrha stdoutLocalPort = 9877
2024-07-09 05:05:24 replication-orchestrator > source-mongodb-v2-read-71651-4-afsnd stderrLocalPort = 9879
2024-07-09 05:05:24 replication-orchestrator > destination-snowflake-write-71651-4-jzrha stderrLocalPort = 9878
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable STATE_STORAGE_S3_ACCESS_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable STATE_STORAGE_S3_SECRET_ACCESS_KEY: ''
2024-07-09 05:05:24 replication-orchestrator > Using default value for environment variable SYNC_JOB_INIT_RETRY_TIMEOUT_MINUTES: '5'
2024-07-09 05:05:24 replication-orchestrator > Creating stdout socket server...
2024-07-09 05:05:24 replication-orchestrator > Creating stdout socket server...
2024-07-09 05:05:24 replication-orchestrator > Creating stderr socket server...
2024-07-09 05:05:24 replication-orchestrator > Creating stderr socket server...
2024-07-09 05:05:24 replication-orchestrator > Creating pod destination-snowflake-write-71651-4-jzrha...
2024-07-09 05:05:24 replication-orchestrator > Creating pod source-mongodb-v2-read-71651-4-afsnd...
2024-07-09 05:05:25 replication-orchestrator > Waiting for init container to be ready before copying files...
2024-07-09 05:05:25 replication-orchestrator > Waiting for init container to be ready before copying files...
2024-07-09 05:05:25 replication-orchestrator > Init container ready..
2024-07-09 05:05:25 replication-orchestrator > Copying files...
2024-07-09 05:05:25 replication-orchestrator > Uploading file: input_state.json
2024-07-09 05:05:25 replication-orchestrator > kubectl cp /tmp/126fb976-19ff-4d69-9a28-8f2666f6619b/input_state.json airbyte/source-mongodb-v2-read-71651-4-afsnd:/config/input_state.json -c init
2024-07-09 05:05:25 replication-orchestrator > Waiting for kubectl cp to complete
2024-07-09 05:05:25 replication-orchestrator > Init container ready..
2024-07-09 05:05:25 replication-orchestrator > Copying files...
2024-07-09 05:05:25 replication-orchestrator > Uploading file: destination_config.json
2024-07-09 05:05:25 replication-orchestrator > kubectl cp /tmp/90d85ddf-fd8c-4e86-a284-e1a81a4d1400/destination_config.json airbyte/destination-snowflake-write-71651-4-jzrha:/config/destination_config.json -c init
2024-07-09 05:05:25 replication-orchestrator > Waiting for kubectl cp to complete
2024-07-09 05:05:25 replication-orchestrator > kubectl cp complete, closing process
2024-07-09 05:05:25 replication-orchestrator > Uploading file: source_config.json
2024-07-09 05:05:25 replication-orchestrator > kubectl cp /tmp/1fa8c361-c673-4d6d-b403-6531c731dfe3/source_config.json airbyte/source-mongodb-v2-read-71651-4-afsnd:/config/source_config.json -c init
2024-07-09 05:05:25 replication-orchestrator > kubectl cp complete, closing process
2024-07-09 05:05:25 replication-orchestrator > Uploading file: destination_catalog.json
2024-07-09 05:05:25 replication-orchestrator > kubectl cp /tmp/b711126d-076e-4ee0-93d1-4b1fa2e5afd2/destination_catalog.json airbyte/destination-snowflake-write-71651-4-jzrha:/config/destination_catalog.json -c init
2024-07-09 05:05:25 replication-orchestrator > Waiting for kubectl cp to complete
2024-07-09 05:05:25 replication-orchestrator > Waiting for kubectl cp to complete
2024-07-09 05:05:26 replication-orchestrator > kubectl cp complete, closing process
2024-07-09 05:05:26 replication-orchestrator > kubectl cp complete, closing process
2024-07-09 05:05:26 replication-orchestrator > Uploading file: FINISHED_UPLOADING
2024-07-09 05:05:26 replication-orchestrator > Uploading file: source_catalog.json
2024-07-09 05:05:26 replication-orchestrator > kubectl cp /tmp/6fb51a35-91a7-4597-8ec2-04cc46507ca2/FINISHED_UPLOADING airbyte/destination-snowflake-write-71651-4-jzrha:/config/FINISHED_UPLOADING -c init
2024-07-09 05:05:26 replication-orchestrator > kubectl cp /tmp/0b9292f0-f341-43db-95f9-c6bd5e051b46/source_catalog.json airbyte/source-mongodb-v2-read-71651-4-afsnd:/config/source_catalog.json -c init
2024-07-09 05:05:26 replication-orchestrator > Waiting for kubectl cp to complete
2024-07-09 05:05:26 replication-orchestrator > Waiting for kubectl cp to complete
2024-07-09 05:05:26 replication-orchestrator > kubectl cp complete, closing process
2024-07-09 05:05:26 replication-orchestrator > Uploading file: FINISHED_UPLOADING
2024-07-09 05:05:26 replication-orchestrator > kubectl cp /tmp/afba468c-ad1e-4042-b764-a960b50261f2/FINISHED_UPLOADING airbyte/source-mongodb-v2-read-71651-4-afsnd:/config/FINISHED_UPLOADING -c init
2024-07-09 05:05:26 replication-orchestrator > kubectl cp complete, closing process
2024-07-09 05:05:26 replication-orchestrator > Waiting until pod is ready...
2024-07-09 05:05:26 replication-orchestrator > Waiting for kubectl cp to complete
2024-07-09 05:05:26 replication-orchestrator > kubectl cp complete, closing process
2024-07-09 05:05:26 replication-orchestrator > Waiting until pod is ready...
2024-07-09 05:05:27 replication-orchestrator > Setting stdout...
2024-07-09 05:05:27 replication-orchestrator > Setting stdout...
2024-07-09 05:05:27 replication-orchestrator > Setting stderr...
2024-07-09 05:05:27 replication-orchestrator > Setting stderr...
2024-07-09 05:05:27 replication-orchestrator > Reading pod IP...
2024-07-09 05:05:27 replication-orchestrator > Pod IP: 192.168.69.39
2024-07-09 05:05:27 replication-orchestrator > Using null stdin output stream...
2024-07-09 05:05:27 replication-orchestrator > Reading messages from protocol version 0.2.0
2024-07-09 05:05:27 replication-orchestrator > Reading pod IP...
2024-07-09 05:05:27 replication-orchestrator > Pod IP: 192.168.78.158
2024-07-09 05:05:27 replication-orchestrator > Creating stdin socket...
2024-07-09 05:05:27 replication-orchestrator > Writing messages to protocol version 0.2.0 2024-07-09 05:05:27 replication-orchestrator > Reading messages from protocol version 0.2.0 2024-07-09 05:05:27 replication-orchestrator > Writing async status RUNNING for KubePodInfo[namespace=airbyte, name=orchestrator-repl-job-71651-attempt-4, mainContainerInfo=KubeContainerInfo[image=airbyte/container-orchestrator:0.50.38, pullPolicy=IfNotPresent]]... 2024-07-09 05:05:27 replication-orchestrator > readFromSource: start 2024-07-09 05:05:27 replication-orchestrator > Starting source heartbeat check. Will check every 1 minutes. 2024-07-09 05:05:27 replication-orchestrator > processMessage: start 2024-07-09 05:05:27 replication-orchestrator > writeToDestination: start 2024-07-09 05:05:27 replication-orchestrator > readFromDestination: start 2024-07-09 05:05:27 source > INFO main i.a.i.s.m.MongoDbSource(main):51 starting source: class io.airbyte.integrations.source.mongodb.MongoDbSource 2024-07-09 05:05:27 destination > INFO main i.a.c.i.b.a.AdaptiveDestinationRunner$Runner(getDestination):54 Running destination under deployment mode: OSS 2024-07-09 05:05:27 destination > INFO main i.a.c.i.b.a.AdaptiveDestinationRunner$Runner(run):67 Starting destination: io.airbyte.integrations.destination.snowflake.SnowflakeDestination 2024-07-09 05:05:27 source > INFO main i.a.c.i.b.IntegrationCliParser$Companion(parseOptions):144 integration args: {read=null, catalog=source_catalog.json, state=input_state.json, config=source_config.json} 2024-07-09 05:05:27 source > INFO main i.a.c.i.b.IntegrationRunner(runInternal):124 Running integration: io.airbyte.integrations.source.mongodb.MongoDbSource 2024-07-09 05:05:27 source > INFO main i.a.c.i.b.IntegrationRunner(runInternal):125 Command: READ 2024-07-09 05:05:27 source > INFO main i.a.c.i.b.IntegrationRunner(runInternal):126 Integration config: IntegrationConfig{command=READ, configPath='source_config.json', catalogPath='source_catalog.json', 
statePath='input_state.json'} 2024-07-09 05:05:28 destination > INFO main i.a.c.i.b.IntegrationCliParser$Companion(parseOptions):144 integration args: {catalog=destination_catalog.json, write=null, config=destination_config.json} 2024-07-09 05:05:28 destination > INFO main i.a.c.i.b.IntegrationRunner(runInternal):124 Running integration: io.airbyte.integrations.destination.snowflake.SnowflakeDestination 2024-07-09 05:05:28 destination > INFO main i.a.c.i.b.IntegrationRunner(runInternal):125 Command: WRITE 2024-07-09 05:05:28 source > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword groups - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 destination > INFO main i.a.c.i.b.IntegrationRunner(runInternal):126 Integration config: IntegrationConfig{command=WRITE, configPath='destination_config.json', catalogPath='destination_catalog.json', statePath='null'} 2024-07-09 05:05:28 source > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword order - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 source > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword group - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 source > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword display_type - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 source > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword airbyte_secret - you should define your own Meta Schema. 
If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 source > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword min - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 source > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword max - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 source > INFO main i.a.i.s.m.s.MongoDbStateManager(createStateManager):74 Initial state [{"type":"GLOBAL","global":{"shared_state":{"state":{"[\"finaloop-prod\",{\"server_id\":\"finaloop-prod\"}]":"{\"sec\":1720489646,\"ord\":261,\"resume_token\":\"82668C96AE000001052B022C0100296E5A100414EB565DFAA44C2B9609085790ED971A46645F69640064668C96AEBCF2CBEDF3BA05160004\"}"},"schema_enforced":true},"stream_states":[{"stream_descriptor":{"name":"amazonorders","namespace":"finaloop-prod"},"stream_state":{"id":"668c3987bcf2cbedf3c228e8","idType":"OBJECT_ID","status":"COMPLETE"}}]}}] 2024-07-09 05:05:28 destination > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword pattern_descriptor - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 destination > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword order - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 destination > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword airbyte_secret - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 destination > WARN main c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword airbyte_hidden - you should define your own Meta Schema. 
If the keyword is irrelevant for validation, just use a NonValidationKeyword 2024-07-09 05:05:28 source > INFO cluster-ClusterId{value='668cc5184ad176664dffe55f', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1026 to client view of cluster 2024-07-09 05:05:28 source > INFO main c.m.i.d.l.SLF4JLogger(info):71 MongoClient with metadata {"driver": {"name": "mongo-java-driver|sync|Airbyte", "version": "4.11.0"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.10.217-205.860.amzn2.x86_64"}, "platform": "Java/Amazon.com Inc./21.0.3+9-LTS"} created with settings MongoClientSettings{readPreference=ReadPreference{name=secondaryPreferred, hedgeOptions=null}, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName='AIRBYTE_USER', source='admin', password=, mechanismProperties=}, transportSettings=null, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@7c2327fa, com.mongodb.Jep395RecordCodecProvider@4d847d32, com.mongodb.KotlinCodecProvider@5f462e3b]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[127.0.0.1:27017], srvHost=finaloop-prod-pl-0.xnbzo.mongodb.net, srvServiceName=mongodb, mode=MULTIPLE, requiredClusterType=REPLICA_SET, requiredReplicaSetName='atlas-elry3t-shard-0', serverSelector='null', clusterListeners='[]', 
serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=true, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null} 2024-07-09 05:05:28 source > INFO main i.a.i.s.m.MongoDbSource(read):153 There are 1 Incremental streams 2024-07-09 05:05:28 source > INFO main i.a.i.s.m.c.MongoDbCdcInitializer(createCdcIterators):93 Subsequent cdc record wait time: PT5M seconds 2024-07-09 05:05:28 source > INFO cluster-ClusterId{value='668cc5184ad176664dffe55f', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1025 to client view of cluster 2024-07-09 05:05:28 source > INFO cluster-ClusterId{value='668cc5184ad176664dffe55f', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1024 to client view of cluster 2024-07-09 05:05:28 source > INFO main c.m.i.d.l.SLF4JLogger(info):71 No server chosen by 
ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out 2024-07-09 05:05:28 source > INFO cluster-ClusterId{value='668cc5184ad176664dffe55f', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1024 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=187601119, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1024, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az6'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668015533bce4fadbd70ae1c, counter=4}, lastWriteDate=Tue Jul 09 05:05:28 UTC 2024, lastUpdateTimeNanos=2789088513347845} 2024-07-09 05:05:28 source > INFO cluster-ClusterId{value='668cc5184ad176664dffe55f', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description 
ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=189059644, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1025, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az4'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=7fffffff0000000000000070, setVersion=4, topologyVersion=TopologyVersion{processId=66801612bcf2cbedf3003b29, counter=6}, lastWriteDate=Tue Jul 09 05:05:28 UTC 2024, lastUpdateTimeNanos=2789088513353015} 2024-07-09 05:05:28 source > INFO cluster-ClusterId{value='668cc5184ad176664dffe55f', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1026 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=189039673, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1026, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az5'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, 
Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668016ba8d3144b6ca265470, counter=3}, lastWriteDate=Tue Jul 09 05:05:28 UTC 2024, lastUpdateTimeNanos=2789088513336384} 2024-07-09 05:05:28 source > INFO cluster-ClusterId{value='668cc5184ad176664dffe55f', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Discovered replica set primary pl-0-us-east-1.xnbzo.mongodb.net:1025 with max election id 7fffffff0000000000000070 and max set version 4 2024-07-09 05:05:28 source > INFO main i.a.i.s.m.c.MongoDbCdcInitializer(logOplogInfo):193 Max oplog size is 5020963072 bytes 2024-07-09 05:05:28 source > INFO main i.a.i.s.m.c.MongoDbCdcInitializer(logOplogInfo):194 Free space in oplog is 24422682624 bytes 2024-07-09 05:05:28 source > INFO main i.a.i.s.m.c.MongoDbResumeTokenHelper(getMostRecentResumeToken):45 Resume token for db finaloop-prod with collection filter [amazonorders] 2024-07-09 05:05:28 destination > INFO main i.a.i.b.d.t.CatalogParser(parseCatalog):118 Running sync with stream configs: [StreamConfig(id=StreamId(finalNamespace=STAGING, finalName=MONGO_AMAZONORDERS, rawNamespace=airbyte_internal, rawName=STAGING_raw__stream_mongo_amazonorders, originalNamespace=STAGING, originalName=mongo_amazonorders), destinationSyncMode=append_dedup, primaryKey=[ColumnId(name=_ID, originalName=_id, canonicalName=_ID)], cursor=Optional[ColumnId(name=_AB_CDC_CURSOR, originalName=_ab_cdc_cursor, canonicalName=_AB_CDC_CURSOR)], columns={ColumnId(name=_ID, originalName=_id, canonicalName=_ID)=STRING, ColumnId(name=ISISPU, originalName=IsISPU, canonicalName=ISISPU)=STRING, ColumnId(name=ISPRIME, originalName=IsPrime, canonicalName=ISPRIME)=STRING, ColumnId(name=BUYERINFO, originalName=BuyerInfo, canonicalName=BUYERINFO)=Struct(properties={}), ColumnId(name=ORDERTYPE, originalName=OrderType, canonicalName=ORDERTYPE)=STRING, ColumnId(name=COMPANYID, originalName=companyId, 
canonicalName=COMPANYID)=STRING, ColumnId(name=CREATEDAT, originalName=createdAt, canonicalName=CREATEDAT)=STRING, ColumnId(name=UPDATEDAT, originalName=updatedAt, canonicalName=UPDATEDAT)=STRING, ColumnId(name=ORDERTOTAL, originalName=OrderTotal, canonicalName=ORDERTOTAL)=Struct(properties={}), ColumnId(name=ORDERSTATUS, originalName=OrderStatus, canonicalName=ORDERSTATUS)=STRING, ColumnId(name=PURCHASEDATE, originalName=PurchaseDate, canonicalName=PURCHASEDATE)=STRING, ColumnId(name=SALESCHANNEL, originalName=SalesChannel, canonicalName=SALESCHANNEL)=STRING, ColumnId(name=AMAZONORDERID, originalName=AmazonOrderId, canonicalName=AMAZONORDERID)=STRING, ColumnId(name=MARKETPLACEID, originalName=MarketplaceId, canonicalName=MARKETPLACEID)=STRING, ColumnId(name=PAYMENTMETHOD, originalName=PaymentMethod, canonicalName=PAYMENTMETHOD)=STRING, ColumnId(name=SELLERORDERID, originalName=SellerOrderId, canonicalName=SELLERORDERID)=STRING, ColumnId(name=ISPREMIUMORDER, originalName=IsPremiumOrder, canonicalName=ISPREMIUMORDER)=STRING, ColumnId(name=LASTUPDATEDATE, originalName=LastUpdateDate, canonicalName=LASTUPDATEDATE)=STRING, ColumnId(name=LATESTSHIPDATE, originalName=LatestShipDate, canonicalName=LATESTSHIPDATE)=STRING, ColumnId(name=_AB_CDC_CURSOR, originalName=_ab_cdc_cursor, canonicalName=_AB_CDC_CURSOR)=INTEGER, ColumnId(name=RESTOCKEDITEMS, originalName=restockedItems, canonicalName=RESTOCKEDITEMS)=Array(items=UNKNOWN), ColumnId(name=ISBUSINESSORDER, originalName=IsBusinessOrder, canonicalName=ISBUSINESSORDER)=STRING, ColumnId(name=REPLACEDORDERID, originalName=ReplacedOrderId, canonicalName=REPLACEDORDERID)=STRING, ColumnId(name=EARLIESTSHIPDATE, originalName=EarliestShipDate, canonicalName=EARLIESTSHIPDATE)=STRING, ColumnId(name=SHIPSERVICELEVEL, originalName=ShipServiceLevel, canonicalName=SHIPSERVICELEVEL)=STRING, ColumnId(name=HASREGULATEDITEMS, originalName=HasRegulatedItems, canonicalName=HASREGULATEDITEMS)=STRING, ColumnId(name=FULFILLMENTCHANNEL, 
originalName=FulfillmentChannel, canonicalName=FULFILLMENTCHANNEL)=STRING, ColumnId(name=ISREPLACEMENTORDER, originalName=IsReplacementOrder, canonicalName=ISREPLACEMENTORDER)=STRING, ColumnId(name=LATESTDELIVERYDATE, originalName=LatestDeliveryDate, canonicalName=LATESTDELIVERYDATE)=STRING, ColumnId(name=_AB_CDC_DELETED_AT, originalName=_ab_cdc_deleted_at, canonicalName=_AB_CDC_DELETED_AT)=STRING, ColumnId(name=_AB_CDC_UPDATED_AT, originalName=_ab_cdc_updated_at, canonicalName=_AB_CDC_UPDATED_AT)=STRING, ColumnId(name=EARLIESTDELIVERYDATE, originalName=EarliestDeliveryDate, canonicalName=EARLIESTDELIVERYDATE)=STRING, ColumnId(name=NUMBEROFITEMSSHIPPED, originalName=NumberOfItemsShipped, canonicalName=NUMBEROFITEMSSHIPPED)=NUMBER, ColumnId(name=PAYMENTMETHODDETAILS, originalName=PaymentMethodDetails, canonicalName=PAYMENTMETHODDETAILS)=Array(items=UNKNOWN), ColumnId(name=INTEGRATIONACCOUNTID, originalName=integrationAccountId, canonicalName=INTEGRATIONACCOUNTID)=STRING, ColumnId(name=ISGLOBALEXPRESSENABLED, originalName=IsGlobalExpressEnabled, canonicalName=ISGLOBALEXPRESSENABLED)=STRING, ColumnId(name=NUMBEROFITEMSUNSHIPPED, originalName=NumberOfItemsUnshipped, canonicalName=NUMBEROFITEMSUNSHIPPED)=NUMBER, ColumnId(name=SHIPMENTSERVICELEVELCATEGORY, originalName=ShipmentServiceLevelCategory, canonicalName=SHIPMENTSERVICELEVELCATEGORY)=STRING}, generationId=0, minimumGenerationId=0, syncId=0)] 2024-07-09 05:05:28 destination > INFO main i.a.i.b.d.o.DefaultSyncOperation(createPerStreamOpClients):52 Preparing required schemas and tables for all streams 2024-07-09 05:05:28 destination > INFO main c.z.h.HikariDataSource(getConnection):109 HikariPool-1 - Starting... 2024-07-09 05:05:29 destination > INFO main c.z.h.p.HikariPool(checkFailFast):554 HikariPool-1 - Added connection net.snowflake.client.jdbc.SnowflakeConnectionV1@2a7d9b41 2024-07-09 05:05:29 destination > INFO main c.z.h.HikariDataSource(getConnection):122 HikariPool-1 - Start completed. 
2024-07-09 05:05:29 source > INFO main i.a.i.s.m.c.MongoDbDebeziumStateUtil(constructInitialDebeziumState):62 Initial resume token '82668CC518000000022B0229296E04' constructed, corresponding to timestamp (seconds after epoch) 1720501528 2024-07-09 05:05:29 source > INFO main i.a.i.s.m.c.MongoDbDebeziumStateUtil(constructInitialDebeziumState):65 Initial Debezium state constructed: {"[\"finaloop-prod\",{\"server_id\":\"finaloop-prod\"}]":"{\"sec\":1720501528,\"ord\":2,\"resume_token\":\"82668CC518000000022B0229296E04\"}"} 2024-07-09 05:05:29 source > INFO main i.a.i.s.m.c.MongoDbDebeziumStateUtil(savedOffset):151 properties: {connector.class=io.debezium.connector.mongodb.MongoDbConnector, max.queue.size=8192, collection.include.list=finaloop-prod\.amazonorders, mongodb.connection.mode=sharded, mongodb.connection.string=********, mongodb.password=********, capture.mode=change_streams_update_full_with_pre_image, tombstones.on.delete=false, mongodb.ssl.enabled=true, value.converter.replace.null.with.default=false, topic.prefix=finaloop-prod, offset.storage.file.filename=/tmp/cdc-state-offset7061437319338417755/offset.dat, decimal.handling.mode=string, capture.scope=database, mongodb.authsource=admin, errors.retry.delay.initial.ms=299, offset.storage=org.apache.kafka.connect.storage.FileOffsetBackingStore, max.queue.size.in.bytes=268435456, mongodb.user=AIRBYTE_USER, errors.retry.delay.max.ms=300, heartbeat.interval.ms=10000, offset.flush.interval.ms=1000, key.converter.schemas.enable=false, errors.max.retries=0, name=finaloop-prod, value.converter.schemas.enable=false, capture.target=finaloop-prod, max.batch.size=2048, snapshot.mode=never, database.include.list=finaloop-prod} 2024-07-09 05:05:29 source > INFO main o.a.k.c.c.AbstractConfig(logAll):370 JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false 2024-07-09 
05:05:29 source > INFO main o.a.k.c.c.AbstractConfig(logAll):370 StandaloneConfig values: access.control.allow.methods = access.control.allow.origin = admin.listeners = null auto.include.jmx.reporter = true bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips config.providers = [] connector.client.config.override.policy = All header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter key.converter = class org.apache.kafka.connect.json.JsonConverter listeners = [http://:8083] metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 offset.flush.interval.ms = 1000 offset.flush.timeout.ms = 5000 offset.storage.file.filename = /tmp/cdc-state-offset7061437319338417755/offset.dat plugin.discovery = hybrid_warn plugin.path = null response.http.headers.config = rest.advertised.host.name = null rest.advertised.listener = null rest.advertised.port = null rest.extension.classes = [] ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS task.shutdown.graceful.timeout.ms = 5000 topic.creation.enable = true topic.tracking.allow.reset = true topic.tracking.enable = true value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-07-09 05:05:29 source > INFO main o.a.k.c.s.FileOffsetBackingStore(start):63 Starting FileOffsetBackingStore with file /tmp/cdc-state-offset7061437319338417755/offset.dat 2024-07-09 05:05:29 source > 
INFO main o.a.k.c.c.AbstractConfig(logAll):370 JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false 2024-07-09 05:05:29 source > INFO main o.a.k.c.c.AbstractConfig(logAll):370 JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false 2024-07-09 05:05:29 source > INFO main i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker 2024-07-09 05:05:29 source > INFO main i.a.i.s.m.c.MongoDbDebeziumStateUtil(parseSavedOffset):200 Closing offsetStorageReader and fileOffsetBackingStore 2024-07-09 05:05:29 source > INFO main o.a.k.c.s.FileOffsetBackingStore(stop):71 Stopped FileOffsetBackingStore 2024-07-09 05:05:30 destination > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:31 destination > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:32 destination > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:32 destination > INFO main i.a.i.d.s.t.SnowflakeDestinationHandler(getInitialRawTableState$lambda$3):92 Retrieving table from Db metadata: airbyte_internal STAGING_raw__stream_mongo_amazonorders 2024-07-09 05:05:32 destination > INFO sync-operations-1 i.a.i.b.d.t.TyperDeduperUtil(runMigrationsAsync$lambda$12):165 Maybe executing SnowflakeDV2Migration migration for stream STAGING.mongo_amazonorders. 
2024-07-09 05:05:32 destination > INFO sync-operations-1 i.a.i.d.s.m.SnowflakeDV2Migration(migrateIfNecessary):32 Initializing DV2 Migration check 2024-07-09 05:05:32 destination > INFO sync-operations-1 i.a.i.b.d.t.BaseDestinationV1V2Migrator(migrateIfNecessary):20 Assessing whether migration is necessary for stream MONGO_AMAZONORDERS 2024-07-09 05:05:32 destination > INFO sync-operations-1 i.a.i.b.d.t.BaseDestinationV1V2Migrator(shouldMigrate):44 Checking whether v1 raw table _airbyte_raw_mongo_amazonorders in dataset STAGING exists 2024-07-09 05:05:33 destination > INFO sync-operations-1 i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:33 destination > INFO sync-operations-1 i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:34 destination > INFO sync-operations-1 i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:34 destination > INFO sync-operations-1 i.a.i.b.d.t.BaseDestinationV1V2Migrator(shouldMigrate):52 Migration Info: Required for Sync mode: true, No existing v2 raw tables: false, A v1 raw table exists: false 2024-07-09 05:05:34 destination > INFO sync-operations-1 i.a.i.b.d.t.BaseDestinationV1V2Migrator(migrateIfNecessary):31 No Migration Required for stream: MONGO_AMAZONORDERS 2024-07-09 05:05:34 destination > INFO main i.a.i.b.d.t.TyperDeduperUtil(executeRawTableMigrations):66 Refetching initial state for streams: [StreamId(finalNamespace=STAGING, finalName=MONGO_AMAZONORDERS, rawNamespace=airbyte_internal, rawName=STAGING_raw__stream_mongo_amazonorders, originalNamespace=STAGING, originalName=mongo_amazonorders)] 2024-07-09 05:05:35 destination > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:36 destination > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:37 destination > INFO main 
i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:37 destination > INFO main i.a.i.d.s.t.SnowflakeDestinationHandler(getInitialRawTableState$lambda$3):92 Retrieving table from Db metadata: airbyte_internal STAGING_raw__stream_mongo_amazonorders 2024-07-09 05:05:37 destination > INFO main i.a.i.b.d.t.TyperDeduperUtil(executeRawTableMigrations):73 Updated states: [DestinationInitialStatus(streamConfig=StreamConfig(id=StreamId(finalNamespace=STAGING, finalName=MONGO_AMAZONORDERS, rawNamespace=airbyte_internal, rawName=STAGING_raw__stream_mongo_amazonorders, originalNamespace=STAGING, originalName=mongo_amazonorders), destinationSyncMode=append_dedup, primaryKey=[ColumnId(name=_ID, originalName=_id, canonicalName=_ID)], cursor=Optional[ColumnId(name=_AB_CDC_CURSOR, originalName=_ab_cdc_cursor, canonicalName=_AB_CDC_CURSOR)], columns={ColumnId(name=_ID, originalName=_id, canonicalName=_ID)=STRING, ColumnId(name=ISISPU, originalName=IsISPU, canonicalName=ISISPU)=STRING, ColumnId(name=ISPRIME, originalName=IsPrime, canonicalName=ISPRIME)=STRING, ColumnId(name=BUYERINFO, originalName=BuyerInfo, canonicalName=BUYERINFO)=Struct(properties={}), ColumnId(name=ORDERTYPE, originalName=OrderType, canonicalName=ORDERTYPE)=STRING, ColumnId(name=COMPANYID, originalName=companyId, canonicalName=COMPANYID)=STRING, ColumnId(name=CREATEDAT, originalName=createdAt, canonicalName=CREATEDAT)=STRING, ColumnId(name=UPDATEDAT, originalName=updatedAt, canonicalName=UPDATEDAT)=STRING, ColumnId(name=ORDERTOTAL, originalName=OrderTotal, canonicalName=ORDERTOTAL)=Struct(properties={}), ColumnId(name=ORDERSTATUS, originalName=OrderStatus, canonicalName=ORDERSTATUS)=STRING, ColumnId(name=PURCHASEDATE, originalName=PurchaseDate, canonicalName=PURCHASEDATE)=STRING, ColumnId(name=SALESCHANNEL, originalName=SalesChannel, canonicalName=SALESCHANNEL)=STRING, ColumnId(name=AMAZONORDERID, originalName=AmazonOrderId, canonicalName=AMAZONORDERID)=STRING, 
ColumnId(name=MARKETPLACEID, originalName=MarketplaceId, canonicalName=MARKETPLACEID)=STRING, ColumnId(name=PAYMENTMETHOD, originalName=PaymentMethod, canonicalName=PAYMENTMETHOD)=STRING, ColumnId(name=SELLERORDERID, originalName=SellerOrderId, canonicalName=SELLERORDERID)=STRING, ColumnId(name=ISPREMIUMORDER, originalName=IsPremiumOrder, canonicalName=ISPREMIUMORDER)=STRING, ColumnId(name=LASTUPDATEDATE, originalName=LastUpdateDate, canonicalName=LASTUPDATEDATE)=STRING, ColumnId(name=LATESTSHIPDATE, originalName=LatestShipDate, canonicalName=LATESTSHIPDATE)=STRING, ColumnId(name=_AB_CDC_CURSOR, originalName=_ab_cdc_cursor, canonicalName=_AB_CDC_CURSOR)=INTEGER, ColumnId(name=RESTOCKEDITEMS, originalName=restockedItems, canonicalName=RESTOCKEDITEMS)=Array(items=UNKNOWN), ColumnId(name=ISBUSINESSORDER, originalName=IsBusinessOrder, canonicalName=ISBUSINESSORDER)=STRING, ColumnId(name=REPLACEDORDERID, originalName=ReplacedOrderId, canonicalName=REPLACEDORDERID)=STRING, ColumnId(name=EARLIESTSHIPDATE, originalName=EarliestShipDate, canonicalName=EARLIESTSHIPDATE)=STRING, ColumnId(name=SHIPSERVICELEVEL, originalName=ShipServiceLevel, canonicalName=SHIPSERVICELEVEL)=STRING, ColumnId(name=HASREGULATEDITEMS, originalName=HasRegulatedItems, canonicalName=HASREGULATEDITEMS)=STRING, ColumnId(name=FULFILLMENTCHANNEL, originalName=FulfillmentChannel, canonicalName=FULFILLMENTCHANNEL)=STRING, ColumnId(name=ISREPLACEMENTORDER, originalName=IsReplacementOrder, canonicalName=ISREPLACEMENTORDER)=STRING, ColumnId(name=LATESTDELIVERYDATE, originalName=LatestDeliveryDate, canonicalName=LATESTDELIVERYDATE)=STRING, ColumnId(name=_AB_CDC_DELETED_AT, originalName=_ab_cdc_deleted_at, canonicalName=_AB_CDC_DELETED_AT)=STRING, ColumnId(name=_AB_CDC_UPDATED_AT, originalName=_ab_cdc_updated_at, canonicalName=_AB_CDC_UPDATED_AT)=STRING, ColumnId(name=EARLIESTDELIVERYDATE, originalName=EarliestDeliveryDate, canonicalName=EARLIESTDELIVERYDATE)=STRING, ColumnId(name=NUMBEROFITEMSSHIPPED, 
originalName=NumberOfItemsShipped, canonicalName=NUMBEROFITEMSSHIPPED)=NUMBER, ColumnId(name=PAYMENTMETHODDETAILS, originalName=PaymentMethodDetails, canonicalName=PAYMENTMETHODDETAILS)=Array(items=UNKNOWN), ColumnId(name=INTEGRATIONACCOUNTID, originalName=integrationAccountId, canonicalName=INTEGRATIONACCOUNTID)=STRING, ColumnId(name=ISGLOBALEXPRESSENABLED, originalName=IsGlobalExpressEnabled, canonicalName=ISGLOBALEXPRESSENABLED)=STRING, ColumnId(name=NUMBEROFITEMSUNSHIPPED, originalName=NumberOfItemsUnshipped, canonicalName=NUMBEROFITEMSUNSHIPPED)=NUMBER, ColumnId(name=SHIPMENTSERVICELEVELCATEGORY, originalName=ShipmentServiceLevelCategory, canonicalName=SHIPMENTSERVICELEVELCATEGORY)=STRING}, generationId=0, minimumGenerationId=0, syncId=0), isFinalTablePresent=true, initialRawTableStatus=InitialRawTableStatus(rawTableExists=true, hasUnprocessedRecords=false, maxProcessedTimestamp=Optional[2024-07-09T03:47:28.822Z]), isSchemaMismatch=false, isFinalTableEmpty=false, destinationState=SnowflakeState(needsSoftReset=false, isAirbyteMetaPresentInRaw=true))] 2024-07-09 05:05:37 destination > INFO sync-operations-2 i.a.i.b.d.t.TyperDeduperUtil(runMigrationsAsync$lambda$12):165 Maybe executing SnowflakeAbMetaAndGenIdMigration migration for stream STAGING.mongo_amazonorders. 
2024-07-09 05:05:37 destination > INFO sync-operations-2 i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:37 destination > INFO sync-operations-2 i.a.i.d.s.m.SnowflakeAbMetaAndGenIdMigration(migrateIfNecessary):86 Skipping airbyte_meta/generation_id migration for STAGING.mongo_amazonorders because the raw table already has the airbyte_meta column 2024-07-09 05:05:37 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):46 executing query within transaction: delete from "airbyte_internal"."_airbyte_destination_state" where ("name" = 'mongo_amazonorders' and "namespace" = 'STAGING') 2024-07-09 05:05:37 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):48 done executing query within transaction: delete from "airbyte_internal"."_airbyte_destination_state" where ("name" = 'mongo_amazonorders' and "namespace" = 'STAGING') 2024-07-09 05:05:37 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):46 executing query within transaction: insert into "airbyte_internal"."_airbyte_destination_state" ("name", "namespace", "destination_state", "updated_at") values ('mongo_amazonorders', 'STAGING', '{"needsSoftReset":false,"airbyteMetaPresentInRaw":true}', '2024-07-09T05:05:37.597218779Z') 2024-07-09 05:05:38 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):48 done executing query within transaction: insert into "airbyte_internal"."_airbyte_destination_state" ("name", "namespace", "destination_state", "updated_at") values ('mongo_amazonorders', 'STAGING', '{"needsSoftReset":false,"airbyteMetaPresentInRaw":true}', '2024-07-09T05:05:37.597218779Z') 2024-07-09 05:05:38 destination > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:38 destination > INFO main i.a.c.d.j.DefaultJdbcDatabase(unsafeQuery$lambda$6):126 closing connection 2024-07-09 05:05:38 destination > INFO 
sync-operations-3 i.a.i.d.s.t.SnowflakeDestinationHandler(execute):230 Executing sql 2de36534-fd46-4d62-968c-779fe2beb5ae-2734c932-b4ac-47c3-bcc9-db618460773b: CREATE TABLE IF NOT EXISTS "airbyte_internal"."STAGING_raw__stream_mongo_amazonorders"( "_airbyte_raw_id" VARCHAR PRIMARY KEY, "_airbyte_extracted_at" TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp(), "_airbyte_loaded_at" TIMESTAMP WITH TIME ZONE DEFAULT NULL, "_airbyte_data" VARIANT, "_airbyte_meta" VARIANT DEFAULT NULL, "_airbyte_generation_id" INTEGER DEFAULT NULL ) data_retention_time_in_days = 1; 2024-07-09 05:05:38 destination > INFO sync-operations-3 i.a.i.d.s.t.SnowflakeDestinationHandler(execute):252 Sql 2de36534-fd46-4d62-968c-779fe2beb5ae-2734c932-b4ac-47c3-bcc9-db618460773b completed in 53 ms 2024-07-09 05:05:38 destination > INFO sync-operations-3 i.a.i.b.d.o.AbstractStreamOperation(prepareFinalTable):68 Final Table exists for stream MONGO_AMAZONORDERS 2024-07-09 05:05:39 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):46 executing query within transaction: delete from "airbyte_internal"."_airbyte_destination_state" where ("name" = 'mongo_amazonorders' and "namespace" = 'STAGING') 2024-07-09 05:05:39 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):48 done executing query within transaction: delete from "airbyte_internal"."_airbyte_destination_state" where ("name" = 'mongo_amazonorders' and "namespace" = 'STAGING') 2024-07-09 05:05:39 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):46 executing query within transaction: insert into "airbyte_internal"."_airbyte_destination_state" ("name", "namespace", "destination_state", "updated_at") values ('mongo_amazonorders', 'STAGING', '{"needsSoftReset":false,"airbyteMetaPresentInRaw":true}', '2024-07-09T05:05:38.968909372Z') 2024-07-09 05:05:39 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):48 done executing query 
within transaction: insert into "airbyte_internal"."_airbyte_destination_state" ("name", "namespace", "destination_state", "updated_at") values ('mongo_amazonorders', 'STAGING', '{"needsSoftReset":false,"airbyteMetaPresentInRaw":true}', '2024-07-09T05:05:38.968909372Z') 2024-07-09 05:05:40 destination > INFO main i.a.c.i.d.a.b.BufferManager():48 Max 'memory' available for buffer allocation 22 GB 2024-07-09 05:05:40 destination > INFO main i.a.c.i.b.IntegrationRunner$Companion(consumeWriteStream$io_airbyte_airbyte_cdk_java_airbyte_cdk_airbyte_cdk_core):423 Starting buffered read of input stream 2024-07-09 05:05:40 destination > INFO main i.a.c.i.d.a.FlushWorkers(start):73 Start async buffer supervisor 2024-07-09 05:05:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), %% used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0 2024-07-09 05:05:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0 2024-07-09 05:05:40 destination > INFO main i.a.c.i.d.a.AsyncStreamConsumer(start):88 class io.airbyte.cdk.integrations.destination.async.AsyncStreamConsumer started. 2024-07-09 05:06:26 source > INFO main i.a.i.s.m.c.MongoDbDebeziumStateUtil(isValidResumeToken):119 Valid resume token '82668C96AE000001052B022C0100296E5A100414EB565DFAA44C2B9609085790ED971A46645F69640064668C96AEBCF2CBEDF3BA05160004' present, corresponding to timestamp (seconds after epoch) : 1720489646. Incremental sync will be performed for up-to-date streams. 2024-07-09 05:06:26 source > INFO main i.a.i.s.m.c.MongoDbCdcInitializer(createCdcIterators):134 Valid offset state discovered. 
Updating state manager with retrieved CDC state {"[\"finaloop-prod\",{\"server_id\":\"finaloop-prod\"}]":"{\"sec\":1720489646,\"ord\":261,\"resume_token\":\"82668C96AE000001052B022C0100296E5A100414EB565DFAA44C2B9609085790ED971A46645F69640064668C96AEBCF2CBEDF3BA05160004\"}"} true... 2024-07-09 05:06:26 source > INFO main i.a.i.s.m.c.MongoDbCdcInitialSnapshotUtils(getStreamsForInitialSnapshot):95 There are 0 stream(s) that are still in progress of an initial snapshot sync. 2024-07-09 05:06:26 source > INFO main i.a.i.s.m.c.MongoDbCdcInitialSnapshotUtils(getStreamsForInitialSnapshot):106 There are 0 stream(s) that have been added to the catalog since the last sync. 2024-07-09 05:06:26 source > INFO main i.a.c.i.d.AirbyteDebeziumHandler(getIncrementalIterators):74 Using CDC: true 2024-07-09 05:06:26 source > INFO main i.a.c.i.d.AirbyteDebeziumHandler(getIncrementalIterators):75 Using DBZ version: 2.6.2.Final 2024-07-09 05:06:26 source > WARN main i.d.e.DebeziumEngine(determineBuilderFactory):346 More than one Debezium engine builder implementation was found, using class io.debezium.embedded.ConvertingEngineBuilderFactory (in Debezium 2.6 you can ignore this warning) 2024-07-09 05:06:26 source > INFO main o.a.k.c.c.AbstractConfig(logAll):370 JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false 2024-07-09 05:06:26 source > INFO main o.a.k.c.c.AbstractConfig(logAll):370 JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false 2024-07-09 05:06:26 source > INFO main o.a.k.c.c.AbstractConfig(logAll):370 EmbeddedWorkerConfig values: access.control.allow.methods = access.control.allow.origin = admin.listeners = null auto.include.jmx.reporter = true bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips config.providers = [] 
connector.client.config.override.policy = All header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter key.converter = class org.apache.kafka.connect.json.JsonConverter listeners = [http://:8083] metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 offset.flush.interval.ms = 1000 offset.flush.timeout.ms = 5000 offset.storage.file.filename = /tmp/cdc-state-offset14740788892965399274/offset.dat offset.storage.partitions = null offset.storage.replication.factor = null offset.storage.topic = plugin.discovery = hybrid_warn plugin.path = null response.http.headers.config = rest.advertised.host.name = null rest.advertised.listener = null rest.advertised.port = null rest.extension.classes = [] ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS task.shutdown.graceful.timeout.ms = 5000 topic.creation.enable = true topic.tracking.allow.reset = true topic.tracking.enable = true value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-07-09 05:06:26 source > INFO main o.a.k.c.c.AbstractConfig(logAll):370 JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false 2024-07-09 05:06:26 source > INFO main o.a.k.c.c.AbstractConfig(logAll):370 JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = 
false schemas.cache.size = 1000 schemas.enable = false 2024-07-09 05:06:26 replication-orchestrator > Attempt 0 to stream status started finaloop-prod:amazonorders 2024-07-09 05:06:26 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker 2024-07-09 05:06:26 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker 2024-07-09 05:06:26 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker 2024-07-09 05:06:26 source > INFO pool-2-thread-1 c.m.i.d.l.SLF4JLogger(info):71 MongoClient with metadata {"driver": {"name": "mongo-java-driver|sync", "version": "4.11.0"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.10.217-205.860.amzn2.x86_64"}, "platform": "Java/Amazon.com Inc./21.0.3+9-LTS"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName='AIRBYTE_USER', source='admin', password=, mechanismProperties=}, transportSettings=null, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, 
com.mongodb.client.model.mql.ExpressionCodecProvider@7c2327fa, com.mongodb.Jep395RecordCodecProvider@4d847d32, com.mongodb.KotlinCodecProvider@5f462e3b]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[127.0.0.1:27017], srvHost=finaloop-prod-pl-0.xnbzo.mongodb.net, srvServiceName=mongodb, mode=MULTIPLE, requiredClusterType=REPLICA_SET, requiredReplicaSetName='atlas-elry3t-shard-0', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=true, invalidHostNameAllowed=false, context=javax.net.ssl.SSLContext@76e9db8d}, applicationName='null', compressorList=[], uuidRepresentation=STANDARD, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null} 2024-07-09 05:06:26 source > INFO cluster-ClusterId{value='668cc5524ad176664dffe560', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1026 to client view of cluster 2024-07-09 05:06:26 source > INFO cluster-ClusterId{value='668cc5524ad176664dffe560', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net 
c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1025 to client view of cluster 2024-07-09 05:06:26 source > INFO pool-2-thread-1 c.m.i.d.l.SLF4JLogger(info):71 No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, serverDescriptions=[]}. Waiting for 30000 ms before timing out 2024-07-09 05:06:26 source > INFO cluster-ClusterId{value='668cc5524ad176664dffe560', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1024 to client view of cluster 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5524ad176664dffe560', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=32837377, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1025, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az4'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=7fffffff0000000000000070, setVersion=4, topologyVersion=TopologyVersion{processId=66801612bcf2cbedf3003b29, counter=6}, lastWriteDate=Tue Jul 09 05:06:26 UTC 2024, lastUpdateTimeNanos=2789147021856805} 2024-07-09 05:06:27 source > INFO 
cluster-ClusterId{value='668cc5524ad176664dffe560', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Discovered replica set primary pl-0-us-east-1.xnbzo.mongodb.net:1025 with max election id 7fffffff0000000000000070 and max set version 4 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5524ad176664dffe560', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1026 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=37551437, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1026, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az5'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668016ba8d3144b6ca265470, counter=3}, lastWriteDate=Tue Jul 09 05:06:26 UTC 2024, lastUpdateTimeNanos=2789147026897303} 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5524ad176664dffe560', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1024 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=38462885, 
setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1024, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az6'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668015533bce4fadbd70ae1c, counter=4}, lastWriteDate=Tue Jul 09 05:06:26 UTC 2024, lastUpdateTimeNanos=2789147028555756}
2024-07-09 05:06:27 source > INFO pool-2-thread-1 o.a.k.c.c.AbstractConfig(logAll):370 JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false
2024-07-09 05:06:27 source > INFO pool-2-thread-1 o.a.k.c.s.FileOffsetBackingStore(start):63 Starting FileOffsetBackingStore with file /tmp/cdc-state-offset14740788892965399274/offset.dat
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.m.MongoDbConnector(start):67 Successfully started MongoDB connector
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.a.c.i.d.i.DebeziumRecordPublisher$start$3(connectorStarted):79 DebeziumEngine notify: connector started
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(start):242 Starting MongoDbConnectorTask with configuration:
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 connector.class = io.debezium.connector.mongodb.MongoDbConnector
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 max.queue.size = 8192
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 collection.include.list = finaloop-prod\.amazonorders
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 mongodb.connection.mode = sharded
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 mongodb.connection.string = ********
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 mongodb.password = ********
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 capture.mode = change_streams_update_full_with_pre_image
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 mongodb.ssl.enabled = true
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 value.converter.replace.null.with.default = false
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 tombstones.on.delete = false
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 topic.prefix = finaloop-prod
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 offset.storage.file.filename = /tmp/cdc-state-offset14740788892965399274/offset.dat
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 decimal.handling.mode = string
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 capture.scope = database
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 mongodb.authsource = admin
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 errors.retry.delay.initial.ms = 299
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 value.converter = org.apache.kafka.connect.json.JsonConverter
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 key.converter = org.apache.kafka.connect.json.JsonConverter
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 offset.storage = org.apache.kafka.connect.storage.FileOffsetBackingStore
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 max.queue.size.in.bytes = 268435456
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 mongodb.user = AIRBYTE_USER
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 errors.retry.delay.max.ms = 300
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 offset.flush.timeout.ms = 5000
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 heartbeat.interval.ms = 10000
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 offset.flush.interval.ms = 1000
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 key.converter.schemas.enable = false
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 errors.max.retries = 0
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 name = finaloop-prod
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 value.converter.schemas.enable = false
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 capture.target = finaloop-prod
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 max.batch.size = 2048
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 snapshot.mode = never
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(lambda$start$0):244 database.include.list = finaloop-prod
2024-07-09 05:06:27 source > INFO pool-2-thread-1
i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getTopicNamingStrategy):1357 Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(getPreviousOffsets):501 Found previous partition offset MongoDbPartition [sourcePartition={server_id=finaloop-prod}]: {sec=1720489646, ord=261, resume_token=82668C96AE000001052B022C0100296E5A100414EB565DFAA44C2B9609085790ED971A46645F69640064668C96AEBCF2CBEDF3BA05160004}
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.u.Threads(threadFactory):271 Requested thread factory for connector MongoDbConnector, id = finaloop-prod named = SignalProcessor
2024-07-09 05:06:27 source > WARN pool-2-thread-1 i.d.s.SnapshotLockProvider(lambda$createService$3):82 Found a not connector specific implementation io.debezium.snapshot.lock.NoLockingSupport for lock mode no_locking_support
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.m.c.MongoDbConnection(validateLogPosition):192 Found existing offset for at {sec=1720489646, ord=261, resume_token=82668C96AE000001052B022C0100296E5A100414EB565DFAA44C2B9609085790ED971A46645F69640064668C96AEBCF2CBEDF3BA05160004}
2024-07-09 05:06:27 source > INFO pool-2-thread-1 c.m.i.d.l.SLF4JLogger(info):71 MongoClient with metadata {"driver": {"name": "mongo-java-driver|sync", "version": "4.11.0"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.10.217-205.860.amzn2.x86_64"}, "platform":
"Java/Amazon.com Inc./21.0.3+9-LTS"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName='AIRBYTE_USER', source='admin', password=, mechanismProperties=}, transportSettings=null, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@7c2327fa, com.mongodb.Jep395RecordCodecProvider@4d847d32, com.mongodb.KotlinCodecProvider@5f462e3b]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[127.0.0.1:27017], srvHost=finaloop-prod-pl-0.xnbzo.mongodb.net, srvServiceName=mongodb, mode=MULTIPLE, requiredClusterType=REPLICA_SET, requiredReplicaSetName='atlas-elry3t-shard-0', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, 
serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=true, invalidHostNameAllowed=false, context=javax.net.ssl.SSLContext@4ef77ea7}, applicationName='null', compressorList=[], uuidRepresentation=STANDARD, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null} 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5534ad176664dffe561', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1026 to client view of cluster 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5534ad176664dffe561', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1025 to client view of cluster 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5534ad176664dffe561', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1024 to client view of cluster 2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.m.ChangeStreamPipelineFactory(create):56 Effective change stream pipeline: [{"$replaceRoot": {"newRoot": {"event": "$$ROOT", "namespace": {"$concat": ["$ns.db", ".", "$ns.coll"]}}}}, {"$match": {"$and": [{"$and": [{"event.ns.db": {"$regularExpression": {"pattern": "finaloop-prod", "options": "i"}}}, {"namespace": {"$regularExpression": {"pattern": "finaloop-prod\\.amazonorders", "options": "i"}}}]}, {"event.operationType": {"$in": ["insert", "update", "replace", "delete"]}}]}}, {"$replaceRoot": {"newRoot": "$event"}}] 2024-07-09 05:06:27 source > INFO pool-2-thread-1 i.d.c.m.MongoUtils(openChangeStream):219 Change stream is restricted to 'finaloop-prod' database 2024-07-09 05:06:27 source > INFO 
pool-2-thread-1 c.m.i.d.l.SLF4JLogger(info):71 No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5534ad176664dffe561', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=32868900, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1025, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az4'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=7fffffff0000000000000070, setVersion=4, topologyVersion=TopologyVersion{processId=66801612bcf2cbedf3003b29, counter=6}, lastWriteDate=Tue Jul 09 05:06:26 UTC 2024, lastUpdateTimeNanos=2789147156985828} 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5534ad176664dffe561', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1026 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread 
successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=34655636, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1026, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az5'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668016ba8d3144b6ca265470, counter=3}, lastWriteDate=Tue Jul 09 05:06:26 UTC 2024, lastUpdateTimeNanos=2789147157444163} 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5534ad176664dffe561', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Discovered replica set primary pl-0-us-east-1.xnbzo.mongodb.net:1025 with max election id 7fffffff0000000000000070 and max set version 4 2024-07-09 05:06:27 source > INFO cluster-ClusterId{value='668cc5534ad176664dffe561', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1024 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=35632356, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1024, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, 
pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az6'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668015533bce4fadbd70ae1c, counter=4}, lastWriteDate=Tue Jul 09 05:06:26 UTC 2024, lastUpdateTimeNanos=2789147161923304} 2024-07-09 05:06:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), %% used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0 2024-07-09 05:06:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0 2024-07-09 05:07:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), %% used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0 2024-07-09 05:07:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0 2024-07-09 05:08:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), %% used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0 2024-07-09 05:08:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0 2024-07-09 05:09:35 source > INFO pool-2-thread-1 
i.d.c.m.c.MongoDbConnection(lambda$isValidResumeToken$7):210 Valid resume token present, so no snapshot will be performed'
2024-07-09 05:09:35 source > INFO pool-2-thread-1 i.d.u.Threads(threadFactory):271 Requested thread factory for connector MongoDbConnector, id = finaloop-prod named = change-event-source-coordinator
2024-07-09 05:09:35 source > INFO pool-2-thread-1 i.d.u.Threads(threadFactory):271 Requested thread factory for connector MongoDbConnector, id = finaloop-prod named = blocking-snapshot
2024-07-09 05:09:35 source > INFO pool-2-thread-1 i.d.u.Threads$3(newThread):288 Creating thread debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator
2024-07-09 05:09:35 source > INFO pool-2-thread-1 i.a.c.i.d.i.DebeziumRecordPublisher$start$3(taskStarted):87 DebeziumEngine notify: task started
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.p.ChangeEventSourceCoordinator(lambda$start$0):134 Metrics registered
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.p.ChangeEventSourceCoordinator(lambda$start$0):137 Context created
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.MongoDbSnapshotChangeEventSource(getSnapshottingTask):141 A previous offset indicating a completed snapshot has been found.
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.MongoDbSnapshotChangeEventSource(getSnapshottingTask):148 According to the connector configuration, no snapshot will occur.
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.p.ChangeEventSourceCoordinator(doSnapshot):254 Snapshot ended with SnapshotResult [status=SKIPPED, offset=MongoDbOffsetContext [sourceInfo=SourceInfo [initialSync=false, collectionId=null, position=Position [ts=Timestamp{value=7389446762676617477, seconds=1720489646, inc=261}, changeStreamSessionTxnId=null, resumeToken=82668C96AE000001052B022C0100296E5A100414EB565DFAA44C2B9609085790ED971A46645F69640064668C96AEBCF2CBEDF3BA05160004]]]]
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.p.ChangeEventSourceCoordinator(streamingConnected):433 Connected metrics set to 'true'
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.u.Threads(threadFactory):271 Requested thread factory for connector MongoDbConnector, id = mongodb named = incremental-snapshot
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.s.MongoDbIncrementalSnapshotChangeEventSource(init):260 No incremental snapshot in progress, no action needed on start
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.p.s.SignalProcessor(start):105 SignalProcessor started. Scheduling it every 5000ms
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.u.Threads$3(newThread):288 Creating thread debezium-mongodbconnector-finaloop-prod-SignalProcessor
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.p.ChangeEventSourceCoordinator(streamEvents):279 Starting streaming
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin: io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker
2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.CommonConnectorConfig(getSourceInfoStructMaker):1649 Loading the custom source info struct maker plugin:
io.debezium.connector.mongodb.MongoDbSourceInfoStructMaker 2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator c.m.i.d.l.SLF4JLogger(info):71 MongoClient with metadata {"driver": {"name": "mongo-java-driver|sync", "version": "4.11.0"}, "os": {"type": "Linux", "name": "Linux", "architecture": "amd64", "version": "5.10.217-205.860.amzn2.x86_64"}, "platform": "Java/Amazon.com Inc./21.0.3+9-LTS"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName='AIRBYTE_USER', source='admin', password=, mechanismProperties=}, transportSettings=null, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@7c2327fa, com.mongodb.Jep395RecordCodecProvider@4d847d32, com.mongodb.KotlinCodecProvider@5f462e3b]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[127.0.0.1:27017], srvHost=finaloop-prod-pl-0.xnbzo.mongodb.net, srvServiceName=mongodb, mode=MULTIPLE, requiredClusterType=REPLICA_SET, requiredReplicaSetName='atlas-elry3t-shard-0', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='15 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, 
readTimeoutMS=10000, receiveBufferSize=0, proxySettings=ProxySettings{host=null, port=null, username=null, password=null}}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=true, invalidHostNameAllowed=false, context=javax.net.ssl.SSLContext@45d0dad5}, applicationName='null', compressorList=[], uuidRepresentation=STANDARD, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null} 2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.MongoDbStreamingChangeEventSource(readChangeStream):101 Reading change stream 2024-07-09 05:09:35 source > INFO cluster-ClusterId{value='668cc60f4ad176664dffe562', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1026 to client view of cluster 2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.ChangeStreamPipelineFactory(create):56 Effective change stream pipeline: [{"$replaceRoot": {"newRoot": {"event": "$$ROOT", "namespace": {"$concat": ["$ns.db", ".", "$ns.coll"]}}}}, {"$match": {"$and": [{"$and": [{"event.ns.db": {"$regularExpression": {"pattern": "finaloop-prod", "options": "i"}}}, {"namespace": {"$regularExpression": {"pattern": "finaloop-prod\\.amazonorders", "options": "i"}}}]}, {"event.operationType": {"$in": ["insert", "update", "replace", "delete"]}}]}}, {"$replaceRoot": {"newRoot": "$event"}}] 2024-07-09 05:09:35 source > INFO 
debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.MongoUtils(openChangeStream):219 Change stream is restricted to 'finaloop-prod' database 2024-07-09 05:09:35 source > INFO cluster-ClusterId{value='668cc60f4ad176664dffe562', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1025 to client view of cluster 2024-07-09 05:09:35 source > INFO cluster-ClusterId{value='668cc60f4ad176664dffe562', description='null'}-srv-finaloop-prod-pl-0.xnbzo.mongodb.net c.m.i.d.l.SLF4JLogger(info):71 Adding discovered server pl-0-us-east-1.xnbzo.mongodb.net:1024 to client view of cluster 2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.MongoDbStreamingChangeEventSource(initChangeStream):206 Resuming streaming from token '82668C96AE000001052B022C0100296E5A100414EB565DFAA44C2B9609085790ED971A46645F69640064668C96AEBCF2CBEDF3BA05160004' 2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.u.Threads(threadFactory):271 Requested thread factory for connector MongoDbConnector, id = finaloop-prod named = replicator-fetcher 2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.e.BufferingChangeStreamCursor(start):324 Fetcher submitted for execution: io.debezium.connector.mongodb.events.BufferingChangeStreamCursor$EventFetcher@1e21ac82 @ java.util.concurrent.ThreadPoolExecutor@6513d974[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] 2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.u.Threads$3(newThread):288 Creating thread debezium-mongodbconnector-finaloop-prod-replicator-fetcher-0 2024-07-09 05:09:35 source > INFO debezium-mongodbconnector-finaloop-prod-replicator-fetcher-0 
c.m.i.d.l.SLF4JLogger(info):71 No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2024-07-09 05:09:35 source > INFO cluster-ClusterId{value='668cc60f4ad176664dffe562', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1025, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=34093872, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1025, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az4'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=7fffffff0000000000000070, setVersion=4, topologyVersion=TopologyVersion{processId=66801612bcf2cbedf3003b29, counter=6}, lastWriteDate=Tue Jul 09 05:09:35 UTC 2024, lastUpdateTimeNanos=2789335504478077}
2024-07-09 05:09:35 source > INFO cluster-ClusterId{value='668cc60f4ad176664dffe562', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1026 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1026, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=33804642, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1026, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az5'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668016ba8d3144b6ca265470, counter=3}, lastWriteDate=Tue Jul 09 05:09:35 UTC 2024, lastUpdateTimeNanos=2789335504597221}
2024-07-09 05:09:35 source > INFO cluster-ClusterId{value='668cc60f4ad176664dffe562', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1025 c.m.i.d.l.SLF4JLogger(info):71 Discovered replica set primary pl-0-us-east-1.xnbzo.mongodb.net:1025 with max election id 7fffffff0000000000000070 and max set version 4
2024-07-09 05:09:35 source > INFO cluster-ClusterId{value='668cc60f4ad176664dffe562', description='null'}-pl-0-us-east-1.xnbzo.mongodb.net:1024 c.m.i.d.l.SLF4JLogger(info):71 Monitor thread successfully connected to server with description ServerDescription{address=pl-0-us-east-1.xnbzo.mongodb.net:1024, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=34271887, setName='atlas-elry3t-shard-0', canonicalAddress=pl-0-us-east-1.xnbzo.mongodb.net:1024, hosts=[pl-0-us-east-1.xnbzo.mongodb.net:1025, pl-0-us-east-1.xnbzo.mongodb.net:1026, pl-0-us-east-1.xnbzo.mongodb.net:1024], passives=[], arbiters=[], primary='pl-0-us-east-1.xnbzo.mongodb.net:1025', tagSet=TagSet{[Tag{name='availabilityZone', value='use1-az6'}, Tag{name='diskState', value='READY'}, Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=4, topologyVersion=TopologyVersion{processId=668015533bce4fadbd70ae1c, counter=4}, lastWriteDate=Tue Jul 09 05:09:35 UTC 2024, lastUpdateTimeNanos=2789335505268262}
2024-07-09 05:09:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), % used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0
2024-07-09 05:09:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
2024-07-09 05:10:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), % used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0
2024-07-09 05:10:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
2024-07-09 05:11:26 source > INFO main i.a.c.i.d.i.DebeziumRecordIterator(requestClose):204 No records were returned by Debezium in the timeout seconds 300, closing the engine and iterator
2024-07-09 05:11:26 source > INFO main i.d.e.EmbeddedEngine(stop):957 Stopping the embedded engine
2024-07-09 05:11:26 source > INFO main i.d.e.EmbeddedEngine(stop):964 Waiting for PT5M for connector to stop
2024-07-09 05:11:27 source > INFO pool-2-thread-1 i.d.e.EmbeddedEngine(stopTaskAndCommitOffset):765 Stopping the task and engine
2024-07-09 05:11:27 source > INFO pool-2-thread-1 i.d.c.c.BaseSourceTask(stop):406 Stopping down connector
2024-07-09 05:11:27 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.c.m.e.BufferingChangeStreamCursor(close):403 Awaiting fetcher thread termination
2024-07-09 05:11:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), % used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0
2024-07-09 05:11:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
2024-07-09 05:11:57 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.p.ChangeEventSourceCoordinator(streamEvents):281 Finished streaming
2024-07-09 05:11:57 source > INFO debezium-mongodbconnector-finaloop-prod-change-event-source-coordinator i.d.p.ChangeEventSourceCoordinator(streamingConnected):433 Connected metrics set to 'false'
2024-07-09 05:11:57 source > INFO pool-2-thread-1 i.d.p.s.SignalProcessor(stop):127 SignalProcessor stopped
2024-07-09 05:11:57 source > INFO pool-2-thread-1 i.d.s.DefaultServiceRegistry(close):105 Debezium ServiceRegistry stopped.
2024-07-09 05:11:57 source > INFO pool-2-thread-1 i.a.c.i.d.i.DebeziumRecordPublisher$start$3(taskStopped):91 DebeziumEngine notify: task stopped
2024-07-09 05:11:57 source > INFO pool-2-thread-1 o.a.k.c.s.FileOffsetBackingStore(stop):71 Stopped FileOffsetBackingStore
2024-07-09 05:11:57 source > INFO pool-2-thread-1 i.d.c.m.MongoDbConnector(stop):82 Stopping MongoDB connector
2024-07-09 05:11:57 source > INFO pool-2-thread-1 i.d.c.m.MongoDbConnector(stop):86 Stopped MongoDB connector
2024-07-09 05:11:57 source > INFO pool-2-thread-1 i.a.c.i.d.i.DebeziumRecordPublisher$start$3(connectorStopped):83 DebeziumEngine notify: connector stopped
2024-07-09 05:11:57 source > INFO pool-2-thread-1 i.a.c.i.d.i.DebeziumRecordPublisher(start$lambda$1):60 Debezium engine shutdown. Engine terminated successfully : true
2024-07-09 05:11:57 source > INFO pool-2-thread-1 i.a.c.i.d.i.DebeziumRecordPublisher(start$lambda$1):63 Connector 'io.debezium.connector.mongodb.MongoDbConnector' completed normally.
2024-07-09 05:11:57 source > INFO main i.a.c.i.d.i.DebeziumRecordIterator(computeNext):88 no record found. polling again.
2024-07-09 05:11:57 source > INFO main i.a.i.s.m.c.MongoDbCdcStateHandler(saveState):36 Saving Debezium state MongoDbCdcState[state={"[\"finaloop-prod\",{\"server_id\":\"finaloop-prod\"}]":"{\"sec\":1720489646,\"ord\":261,\"resume_token\":\"82668C96AE000001052B022C0100296E5A100414EB565DFAA44C2B9609085790ED971A46645F69640064668C96AEBCF2CBEDF3BA05160004\"}"}, schema_enforced=true]...
2024-07-09 05:11:57 source > WARN main i.a.c.i.b.IntegrationRunner$Companion(stopOrphanedThreads):486 The main thread is exiting while children non-daemon threads from a connector are still active. Ideally, this situation should not happen... Please check with maintainers if the connector or library code should safely clean up its threads before quitting instead.
The main thread is: main (RUNNABLE)
Thread stacktrace: java.base/java.lang.Thread.getStackTrace(Thread.java:2450)
    at io.airbyte.cdk.integrations.base.IntegrationRunner$Companion.dumpThread(IntegrationRunner.kt:544)
    at io.airbyte.cdk.integrations.base.IntegrationRunner$Companion.access$dumpThread(IntegrationRunner.kt:371)
    at io.airbyte.cdk.integrations.base.IntegrationRunner$Companion$stopOrphanedThreads$1.invoke(IntegrationRunner.kt:491)
    at io.github.oshai.kotlinlogging.internal.MessageInvokerKt.toStringSafe(MessageInvoker.kt:5)
    at io.github.oshai.kotlinlogging.slf4j.internal.LocationAwareKLogger$warn$1.invoke(LocationAwareKLogger.kt:191)
    at io.github.oshai.kotlinlogging.slf4j.internal.LocationAwareKLogger$warn$1.invoke(LocationAwareKLogger.kt:191)
    at io.github.oshai.kotlinlogging.slf4j.internal.LocationAwareKLogger.at(LocationAwareKLogger.kt:43)
    at io.github.oshai.kotlinlogging.slf4j.internal.LocationAwareKLogger.warn(LocationAwareKLogger.kt:191)
    at io.airbyte.cdk.integrations.base.IntegrationRunner$Companion.stopOrphanedThreads(IntegrationRunner.kt:486)
    at io.airbyte.cdk.integrations.base.IntegrationRunner$Companion.stopOrphanedThreads$default(IntegrationRunner.kt:475)
    at io.airbyte.cdk.integrations.base.IntegrationRunner.readSerial(IntegrationRunner.kt:338)
    at io.airbyte.cdk.integrations.base.IntegrationRunner.runInternal(IntegrationRunner.kt:184)
    at io.airbyte.cdk.integrations.base.IntegrationRunner.run(IntegrationRunner.kt:116)
    at io.airbyte.integrations.source.mongodb.MongoDbSource.main(MongoDbSource.java:52)
2024-07-09 05:11:57 source > WARN main i.a.c.i.b.IntegrationRunner$Companion(stopOrphanedThreads):509 Active non-daemon thread: debezium-mongodbconnector-finaloop-prod-replicator-fetcher-0 (RUNNABLE)
Thread stacktrace: java.base/sun.nio.ch.Net.poll(Native Method)
    at java.base/sun.nio.ch.NioSocketImpl.park(NioSocketImpl.java:191)
    at java.base/sun.nio.ch.NioSocketImpl.park(NioSocketImpl.java:201)
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:309)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.mongodb.internal.connection.SocketStream.read(SocketStream.java:175)
    at com.mongodb.internal.connection.SocketStream.read(SocketStream.java:200)
    at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:739)
    at com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:603)
    at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:451)
    at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:372)
    at com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:114)
    at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:765)
    at com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:76)
    at com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:209)
    at com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:115)
    at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:83)
    at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:74)
    at com.mongodb.internal.connection.DefaultServer$OperationCountTrackingConnection.command(DefaultServer.java:299)
    at com.mongodb.internal.operation.SyncOperationHelper.createReadCommandAndExecute(SyncOperationHelper.java:273)
    at com.mongodb.internal.operation.SyncOperationHelper.lambda$executeRetryableRead$3(SyncOperationHelper.java:191)
    at com.mongodb.internal.operation.SyncOperationHelper.lambda$withSourceAndConnection$0(SyncOperationHelper.java:127)
    at com.mongodb.internal.operation.SyncOperationHelper.withSuppliedResource(SyncOperationHelper.java:152)
    at com.mongodb.internal.operation.SyncOperationHelper.lambda$withSourceAndConnection$1(SyncOperationHelper.java:126)
    at com.mongodb.internal.operation.SyncOperationHelper.withSuppliedResource(SyncOperationHelper.java:152)
    at com.mongodb.internal.operation.SyncOperationHelper.withSourceAndConnection(SyncOperationHelper.java:125)
    at com.mongodb.internal.operation.SyncOperationHelper.lambda$executeRetryableRead$4(SyncOperationHelper.java:189)
    at com.mongodb.internal.operation.SyncOperationHelper.lambda$decorateReadWithRetries$12(SyncOperationHelper.java:292)
    at com.mongodb.internal.async.function.RetryingSyncSupplier.get(RetryingSyncSupplier.java:67)
    at com.mongodb.internal.operation.SyncOperationHelper.executeRetryableRead(SyncOperationHelper.java:194)
    at com.mongodb.internal.operation.SyncOperationHelper.executeRetryableRead(SyncOperationHelper.java:176)
    at com.mongodb.internal.operation.AggregateOperationImpl.execute(AggregateOperationImpl.java:193)
    at com.mongodb.internal.operation.ChangeStreamOperation.lambda$execute$0(ChangeStreamOperation.java:187)
    at com.mongodb.internal.operation.SyncOperationHelper.withReadConnectionSource(SyncOperationHelper.java:99)
    at com.mongodb.internal.operation.ChangeStreamOperation.execute(ChangeStreamOperation.java:185)
    at com.mongodb.internal.operation.ChangeStreamOperation.execute(ChangeStreamOperation.java:54)
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:153)
    at com.mongodb.client.internal.ChangeStreamIterableImpl.execute(ChangeStreamIterableImpl.java:212)
    at com.mongodb.client.internal.ChangeStreamIterableImpl.cursor(ChangeStreamIterableImpl.java:187)
    at io.debezium.connector.mongodb.events.BufferingChangeStreamCursor$EventFetcher.run(BufferingChangeStreamCursor.java:221)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    at java.base/java.lang.Thread.run(Thread.java:1583)
creationStack=java.base/java.lang.Thread.getStackTrace(Thread.java:2450)
    io.airbyte.cdk.integrations.base.IntegrationRunner$ThreadCreationInfo.<init>(IntegrationRunner.kt:364)
    io.airbyte.cdk.integrations.base.IntegrationRunner$Companion$threadCreationInfo$1.childValue(IntegrationRunner.kt:375)
    io.airbyte.cdk.integrations.base.IntegrationRunner$Companion$threadCreationInfo$1.childValue(IntegrationRunner.kt:373)
    java.base/java.lang.ThreadLocal$ThreadLocalMap.<init>(ThreadLocal.java:469)
    java.base/java.lang.ThreadLocal.createInheritedMap(ThreadLocal.java:328)
    java.base/java.lang.Thread.<init>(Thread.java:749)
    java.base/java.lang.Thread.<init>(Thread.java:1275)
    io.debezium.util.Threads$3.newThread(Threads.java:289)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:637)
    java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:928)
    java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1364)
    java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123)
    io.debezium.connector.mongodb.events.BufferingChangeStreamCursor.start(BufferingChangeStreamCursor.java:325)
    io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.readChangeStream(MongoDbStreamingChangeEventSource.java:105)
    io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.lambda$execute$0(MongoDbStreamingChangeEventSource.java:86)
    io.debezium.connector.mongodb.connection.MongoDbConnection.lambda$execute$0(MongoDbConnection.java:89)
    io.debezium.connector.mongodb.connection.MongoDbConnection.execute(MongoDbConnection.java:105)
    io.debezium.connector.mongodb.connection.MongoDbConnection.execute(MongoDbConnection.java:88)
    io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.execute(MongoDbStreamingChangeEventSource.java:85)
    io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.execute(MongoDbStreamingChangeEventSource.java:38)
    io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:280)
    io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:197)
    io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:140)
    java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
    java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    java.base/java.lang.Thread.run(Thread.java:1583)
creationTime=2024-07-09T05:09:35.464091396Z
2024-07-09 05:11:57 source > INFO main i.a.c.i.b.IntegrationRunner(runInternal):268 Completed integration: io.airbyte.integrations.source.mongodb.MongoDbSource
2024-07-09 05:11:57 source > INFO main i.a.i.s.m.MongoDbSource(main):53 completed source: class
io.airbyte.integrations.source.mongodb.MongoDbSource
2024-07-09 05:12:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), % used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0
2024-07-09 05:12:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
2024-07-09 05:13:40 destination > INFO pool-6-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 22.05 GB, allocated: 10 MB (10.0 MB), % used: 4.429482636428065E-4 | State Manager memory usage: Allocated: 10 MB, Used: 0 bytes, percentage Used 0.0
2024-07-09 05:13:40 destination > INFO pool-9-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):127 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
2024-07-09 05:13:57 source > ERROR pool-3-thread-1 i.a.c.i.b.IntegrationRunner$Companion(stopOrphanedThreads$lambda$5):527 Failed to interrupt children non-daemon threads, forcefully exiting NOW...
2024-07-09 05:13:59 replication-orchestrator > (pod: airbyte / source-mongodb-v2-read-71651-4-afsnd) - Closed all resources for pod
2024-07-09 05:13:59 replication-orchestrator > readFromSource: source exception
io.airbyte.workers.internal.exception.SourceException: Source process exited with non-zero exit code 2
    at io.airbyte.workers.general.BufferedReplicationWorker.readFromSource(BufferedReplicationWorker.java:365) ~[io.airbyte-airbyte-commons-worker-0.50.38.jar:?]
    at io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsyncWithHeartbeatCheck$3(BufferedReplicationWorker.java:235) ~[io.airbyte-airbyte-commons-worker-0.50.38.jar:?]
    at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
    at java.lang.Thread.run(Thread.java:1583) ~[?:?]
2024-07-09 05:13:59 replication-orchestrator > readFromSource: done. (source.isFinished:true, fromSource.isClosed:false)
2024-07-09 05:13:59 replication-orchestrator > processMessage: done. (fromSource.isDone:true, forDest.isClosed:false)
2024-07-09 05:13:59 replication-orchestrator > writeToDestination: done. (forDest.isDone:true, isDestRunning:true)
2024-07-09 05:13:59 replication-orchestrator > thread status... timeout thread: false , replication thread: true
2024-07-09 05:13:59 replication-orchestrator > Attempt 0 to update stream status incomplete finaloop-prod:amazonorders
2024-07-09 05:13:59 destination > INFO main i.a.c.i.b.IntegrationRunner$Companion(consumeWriteStream$io_airbyte_airbyte_cdk_java_airbyte_cdk_airbyte_cdk_core):445 Finished buffered read of input stream
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.FlushWorkers(close):193 Closing flush workers -- waiting for all buffers to flush
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.FlushWorkers(close):230 Closing flush workers -- all buffers flushed
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.s.GlobalAsyncStateManager(flushStates):153 Flushing states
2024-07-09 05:13:59 replication-orchestrator > starting state flush thread for connectionId 9017f30a-1d76-4594-8ba3-a8d9c66b3051
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.s.GlobalAsyncStateManager(flushStates):207 Flushing states complete
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.GlobalMemoryManager(free):78 Freeing 624 bytes..
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.FlushWorkers(close):238 Closing flush workers -- supervisor shut down
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.FlushWorkers(close):240 Closing flush workers -- Starting worker pool shutdown..
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.FlushWorkers(close):245 Closing flush workers -- workers shut down
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.b.BufferManager(close):73 Buffers cleared..
2024-07-09 05:13:59 destination > INFO sync-operations-4 i.a.i.d.s.o.SnowflakeStorageOperation(cleanupStage):78 Cleaning up stage "airbyte_internal"."STAGING_raw__stream_mongo_amazonorders"
2024-07-09 05:13:59 destination > INFO sync-operations-4 i.a.i.b.d.o.AbstractStreamOperation(finalizeTable):154 Skipping typing and deduping for stream STAGING.mongo_amazonorders because it had no records during this sync and no unprocessed records from a previous sync.
2024-07-09 05:13:59 destination > INFO main i.a.i.b.d.o.DefaultSyncOperation(finalizeStreams):150 Cleaning up sync operation thread pools
2024-07-09 05:13:59 destination > INFO main i.a.c.i.d.a.AsyncStreamConsumer(close):185 class io.airbyte.cdk.integrations.destination.async.AsyncStreamConsumer closed
2024-07-09 05:13:59 destination > INFO main i.a.c.i.b.IntegrationRunner(runInternal):268 Completed integration: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2024-07-09 05:13:59 destination > INFO main i.a.c.i.b.a.AdaptiveDestinationRunner$Runner(run):69 Completed destination: io.airbyte.integrations.destination.snowflake.SnowflakeDestination
2024-07-09 05:14:02 replication-orchestrator > (pod: airbyte / destination-snowflake-write-71651-4-jzrha) - Closed all resources for pod
2024-07-09 05:14:02 replication-orchestrator > readFromDestination: done. (writeToDestFailed:false, dest.isFinished:true)
2024-07-09 05:14:02 replication-orchestrator > thread status... timeout thread: false , replication thread: true
2024-07-09 05:14:02 replication-orchestrator > sync summary: {
  "status" : "failed",
  "startTime" : 1720501524805,
  "endTime" : 1720502042138,
  "totalStats" : {
    "bytesEmitted" : 0,
    "destinationStateMessagesEmitted" : 1,
    "destinationWriteEndTime" : 1720502042130,
    "destinationWriteStartTime" : 1720501524812,
    "meanSecondsBeforeSourceStateMessageEmitted" : 0,
    "maxSecondsBeforeSourceStateMessageEmitted" : 1,
    "maxSecondsBetweenStateMessageEmittedandCommitted" : 121,
    "meanSecondsBetweenStateMessageEmittedandCommitted" : 121,
    "recordsEmitted" : 0,
    "replicationEndTime" : 1720502042136,
    "replicationStartTime" : 1720501524805,
    "sourceReadEndTime" : 0,
    "sourceReadStartTime" : 1720501524812,
    "sourceStateMessagesEmitted" : 1
  },
  "streamStats" : [ ],
  "performanceMetrics" : {
    "processFromSource" : { "elapsedTimeInNanos" : 61106815, "executionCount" : 3, "avgExecTimeInNanos" : 2.0368938333333332E7 },
    "readFromSource" : { "elapsedTimeInNanos" : 511027306179, "executionCount" : 565479, "avgExecTimeInNanos" : 903706.9567198782 },
    "processFromDest" : { "elapsedTimeInNanos" : 16384810, "executionCount" : 1, "avgExecTimeInNanos" : 1.638481E7 },
    "writeToDest" : { "elapsedTimeInNanos" : 4052128, "executionCount" : 1, "avgExecTimeInNanos" : 4052128.0 },
    "readFromDest" : { "elapsedTimeInNanos" : 514174401944, "executionCount" : 1206893, "avgExecTimeInNanos" : 426031.47250336193 }
  }
}
2024-07-09 05:14:02 replication-orchestrator > failures: [ {
  "failureOrigin" : "source",
  "internalMessage" : "Source process exited with non-zero exit code 2",
  "externalMessage" : "Something went wrong within the source connector",
  "metadata" : { "attemptNumber" : 4, "jobId" : 71651, "connector_command" : "read" },
  "stacktrace" : "io.airbyte.workers.internal.exception.SourceException: Source process exited with non-zero exit code 2\n\tat io.airbyte.workers.general.BufferedReplicationWorker.readFromSource(BufferedReplicationWorker.java:365)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsyncWithHeartbeatCheck$3(BufferedReplicationWorker.java:235)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n",
  "timestamp" : 1720502039142
} ]
2024-07-09 05:14:02 replication-orchestrator > Returning output...
2024-07-09 05:14:02 replication-orchestrator >
2024-07-09 05:14:02 replication-orchestrator > ----- END REPLICATION -----
2024-07-09 05:14:02 replication-orchestrator >
2024-07-09 05:14:02 replication-orchestrator > Writing async status SUCCEEDED for KubePodInfo[namespace=airbyte, name=orchestrator-repl-job-71651-attempt-4, mainContainerInfo=KubeContainerInfo[image=airbyte/container-orchestrator:0.50.38, pullPolicy=IfNotPresent]]...
2024-07-09 05:14:03 platform > State Store reports orchestrator pod orchestrator-repl-job-71651-attempt-4 succeeded
2024-07-09 05:14:03 platform > Retry State: RetryManager(completeFailureBackoffPolicy=BackoffPolicy(minInterval=PT10S, maxInterval=PT30M, base=3), partialFailureBackoffPolicy=null, successiveCompleteFailureLimit=5, totalCompleteFailureLimit=10, successivePartialFailureLimit=1000, totalPartialFailureLimit=10, successiveCompleteFailures=5, totalCompleteFailures=5, successivePartialFailures=0, totalPartialFailures=0) Backoff before next attempt: 13 minutes 30 seconds
2024-07-09 05:14:03 platform > Failing job: 71651, reason: Job failed after too many retries for connection 9017f30a-1d76-4594-8ba3-a8d9c66b3051