Search before asking
Version
2.3.5
What's Wrong?
My Table
```sql
CREATE TABLE dwd_ess_big_cell_inc
(
    time                          DATETIME    NOT NULL COMMENT '',
    namespace_code                VARCHAR(64) NOT NULL COMMENT '',
    device_instance_property_code VARCHAR(64) NOT NULL COMMENT '',
    device_instance_code          VARCHAR(64) NOT NULL COMMENT '',
    value                         VARCHAR(64) NULL COMMENT '',
    kafka_time                    DATETIME    NOT NULL COMMENT 'creation time',
    create_time                   DATETIME    NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time'
) ENGINE = OLAP
UNIQUE KEY(time, namespace_code, device_instance_property_code, device_instance_code)
COMMENT ''
PARTITION BY RANGE (time) ()
DISTRIBUTED BY HASH(time, namespace_code, device_instance_property_code, device_instance_code)
PROPERTIES
(
    "min_load_replica_num" = "1",
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "HOUR",
    "dynamic_partition.start" = "-24",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "24",
    "dynamic_partition.replication_num" = "3",
    "compaction_policy" = "time_series",
    "enable_unique_key_merge_on_write" = "false"
);
```
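To confirm that the dynamic-partition settings above actually took effect on the new cluster (hourly partitions, 24 buckets, 3 replicas), the scheduling state and the materialized partitions can be inspected directly. These are standard Doris statements; run them in the database that holds the table:

```sql
-- Show the dynamic-partition scheduler state for all tables
-- (last scheduling time, last error, partition range)
SHOW DYNAMIC PARTITION TABLES;

-- List the materialized hourly partitions with their bucket and replica counts
SHOW PARTITIONS FROM dwd_ess_big_cell_inc;
```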
Flink Doris connector config
[screenshot: Flink Doris connector sink configuration]
"properties": {
"format": "json",
"timezone": "Asia/Shanghai",
"read_json_by_line": "true",
"send_batch_parallelism": 10,
"memtable_on_sink_node": "true",
"columns": "time,time=from_unixtime(round(time/1000,0)),namespace_code,device_instance_property_code,device_instance_code,value,kafka_time,kafka_time=from_unixtime(round(kafka_time/1000,0))"
},
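The `columns` mapping above converts epoch-millisecond fields into DATETIME values during load. As a sanity check, the same expression can be evaluated standalone; the literal timestamp here is only an illustration, not a value from the actual data:

```sql
-- round(ms / 1000, 0) yields epoch seconds, which from_unixtime converts
-- to a DATETIME in the session time zone (Asia/Shanghai in this setup)
SELECT from_unixtime(round(1716900000000 / 1000, 0));
```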
My FE Config

```
enable_single_replica_load = true
fetch_stream_load_record_interval_second = 30
```
My BE Config

```
number_tablet_writer_threads = 48
streaming_load_json_max_mb = 1024
enable_single_replica_load = true
jsonb_type_length_soft_limit_bytes = 2147483643
string_type_length_soft_limit_bytes = 2147483643
enable_stream_load_record = true
max_send_batch_parallelism_per_job = 20
```
1 FE, 3 BE (4 machines, each with 64 GB RAM and 32 vCPUs)
[screenshot: cluster node list]
Stream Load Result
Sometimes it looks like this:
[screenshot: Stream Load result]
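Since `enable_stream_load_record` is on (and FE polls records via `fetch_stream_load_record_interval_second`), per-load timings can be pulled from FE to see where the time is going, e.g. load time versus commit time. A sketch, assuming a placeholder database name `your_db`:

```sql
-- Inspect the most recent stream load records kept by FE;
-- compare LoadTimeMs across loads to spot the slow ones
SHOW STREAM LOAD FROM your_db ORDER BY FinishTime DESC LIMIT 10;
```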
What You Expected?
In earlier tests, 10 concurrent processes each committing one million writes finished in under 10 seconds. I don't understand why writes to the new cluster have become so slow; the cluster contains only this single table and has plenty of resources. I'd like to find out what the problem is.
How to Reproduce?
No response
Anything Else?
No response
Are you willing to submit PR?
Code of Conduct