tests::three_logserver_seal_empty seems to be flaky #2871

@tillrohrmann

Description

The test `tests::three_logserver_seal_empty` fails intermittently on CI.

https://github.com/restatedev/restate/actions/runs/13759772471/job/38473253109#step:12:2045

test tests::three_logserver_seal_empty ... 2025-03-10T08:18:09.853722Z  INFO restate_local_cluster_runner::node: Setting the metadata server to replicated
2025-03-10T08:18:09.864896Z  INFO restate_local_cluster_runner::cluster: Starting cluster local-cluster in /tmp/.tmpm7gsid
2025-03-10T08:18:09.867811Z  INFO restate_local_cluster_runner::node: Started node node-1 in /tmp/.tmpm7gsid/node-1 (pid 105428) node_address=unix:/tmp/.tmpm7gsid/node-1/node.sock admin_address=None ingress_address=None
2025-03-10T08:18:09.867902Z  INFO restate_local_cluster_runner::node: To connect to node node-1 using restate CLI:
export RESTATE_ADMIN_URL=http://None
2025-03-10T08:18:09.870209Z  INFO restate_local_cluster_runner::node: Started node node-2 in /tmp/.tmpm7gsid/node-2 (pid 105429) node_address=unix:/tmp/.tmpm7gsid/node-2/node.sock admin_address=None ingress_address=None
2025-03-10T08:18:09.870320Z  INFO restate_local_cluster_runner::node: To connect to node node-2 using restate CLI:
export RESTATE_ADMIN_URL=http://None
2025-03-10T08:18:09.874625Z  INFO restate_local_cluster_runner::node: Started node node-3 in /tmp/.tmpm7gsid/node-3 (pid 105469) node_address=unix:/tmp/.tmpm7gsid/node-3/node.sock admin_address=None ingress_address=None
2025-03-10T08:18:09.874693Z  INFO restate_local_cluster_runner::node: To connect to node node-3 using restate CLI:
export RESTATE_ADMIN_URL=http://None
2025-03-10T08:18:10.387314Z  INFO restate_local_cluster_runner::node: Node node-1 MetadataServer check is healthy after 3 attempts
2025-03-10T08:18:10.399442Z  INFO restate_local_cluster_runner::node: Node node-1 LogServer check is healthy after 3 attempts
2025-03-10T08:18:10.400831Z  INFO restate_local_cluster_runner::node: Node node-2 LogServer check is healthy after 3 attempts
2025-03-10T08:18:10.646394Z  INFO restate_local_cluster_runner::node: Node node-2 MetadataServer check is healthy after 4 attempts
2025-03-10T08:18:10.685571Z  INFO restate_local_cluster_runner::node: Node node-3 LogServer check is healthy after 4 attempts
2025-03-10T08:18:10.900658Z  INFO restate_local_cluster_runner::node: Node node-3 MetadataServer check is healthy after 5 attempts
2025-03-10T08:18:10.982741Z  INFO server{server_name=node-rpc-server uds.path="/tmp/.tmpm7gsid/replicated-loglet-client/node.sock"}: restate_core::network::net_util: Server listening
2025-03-10T08:18:10.982990Z  INFO restate_node::init: Trying to join the cluster 'local-cluster'
2025-03-10T08:18:11.100631Z  INFO restate_node::init: My Node ID is N5:1 roles= address=unix:/tmp/.tmpm7gsid/replicated-loglet-client/node.sock location=
2025-03-10T08:18:11.211648Z  WARN restate_types::config::bifrost: LocalLoglet rocksdb_memory_budget is not set, defaulting to 1MB
2025-03-10T08:18:11.211697Z  WARN restate_types::config::bifrost: LocalLoglet rocksdb_memory_budget is not set, defaulting to 1MB
2025-03-10T08:18:11.454054Z  WARN seal{otel.name="replicated_loglet: seal"}:run: restate_bifrost::providers::replicated_loglet::tasks::seal: Cannot seal the loglet as all nodeset members are in `Provisioning` storage state loglet_id=1_0 is_sealed=false
2025-03-10T08:18:11.454208Z  WARN restate_local_cluster_runner::node: Node node-1 (pid 105428) dropped without explicit shutdown
2025-03-10T08:18:11.455509Z  WARN restate_local_cluster_runner::node: Node node-2 (pid 105429) dropped without explicit shutdown
2025-03-10T08:18:11.456143Z  WARN restate_local_cluster_runner::node: Node node-3 (pid 105469) dropped without explicit shutdown
2025-03-10T08:18:11.456476Z  INFO shutdown_node{reason="completed"}: restate_core::task_center: ** Shutdown requested reason=completed
2025-03-10T08:18:11.456775Z  INFO restate_node: Restate node roles [] were started
2025-03-10T08:18:11.457364Z  INFO server{server_name=node-rpc-server uds.path="/tmp/.tmpm7gsid/replicated-loglet-client/node.sock"}: restate_core::network::net_util: Stopped listening
2025-03-10T08:18:11.458059Z  INFO restate_core::metadata::manager: Metadata manager stopped
2025-03-10T08:18:11.458239Z  INFO restate_bifrost::providers::memory_loglet: Shutting down in-memory loglet provider
2025-03-10T08:18:11.458883Z  INFO shutdown_node{reason="completed"}: restate_core::task_center: ** Shutdown completed in 2.407945ms
FAILED

failures:

failures:
    tests::three_logserver_seal_empty

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 8 filtered out; finished in 1.61s

──── STDERR:             restate-server::replicated_loglet tests::three_logserver_seal_empty
!!!! Running with test-util enabled !!!!
Will retain local cluster runner tempdir upon cluster drop: /tmp/.tmpm7gsid
node-2	| !!!! Running with test-util enabled !!!!
node-3	| !!!! Running with test-util enabled !!!!
node-1	| !!!! Running with test-util enabled !!!!
node-1	| 2025-03-10T08:18:09.938662Z INFO restate_server
node-1	|   Starting Restate Server 1.2.2-dev (412fabb x86_64-unknown-linux-gnu 2025-03-10)
node-1	|     node_name: "node-1"
node-1	|     config_source: /tmp/.tmpm7gsid/node-1/config.toml
node-1	|     base_dir: /tmp/.tmpm7gsid/node-1/
node-1	| on main
node-3	| 2025-03-10T08:18:09.936277Z INFO restate_server
node-3	|   Starting Restate Server 1.2.2-dev (412fabb x86_64-unknown-linux-gnu 2025-03-10)
node-3	|     node_name: "node-3"
node-3	|     config_source: /tmp/.tmpm7gsid/node-3/config.toml
node-3	|     base_dir: /tmp/.tmpm7gsid/node-3/
node-3	| on main
node-2	| 2025-03-10T08:18:09.952248Z INFO restate_server
node-2	|   Starting Restate Server 1.2.2-dev (412fabb x86_64-unknown-linux-gnu 2025-03-10)
node-2	|     node_name: "node-2"
node-2	|     config_source: /tmp/.tmpm7gsid/node-2/config.toml
node-2	|     base_dir: /tmp/.tmpm7gsid/node-2/
node-2	| on main
node-3	| 2025-03-10T08:18:10.303412Z INFO restate_core::network::net_util
node-3	|   Server listening
node-3	| on rs:worker-13
node-3	|   in restate_core::network::net_util::server
node-3	|     server_name: node-rpc-server 
node-3	|     uds.path: "/tmp/.tmpm7gsid/node-3/node.sock"
node-3	| 2025-03-10T08:18:10.303746Z INFO restate_node::init
node-3	|   Trying to join the cluster 'local-cluster'
node-3	| on rs:worker-13
node-3	| 2025-03-10T08:18:10.304685Z INFO restate_metadata_server::raft::server
node-3	|   Cluster has not been provisioned, yet. Awaiting provisioning via `restatectl provision`
node-3	| on rs:worker-13
node-1	| 2025-03-10T08:18:10.311478Z INFO restate_core::network::net_util
node-1	|   Server listening
node-1	| on rs:worker-15
node-1	|   in restate_core::network::net_util::server
node-1	|     server_name: node-rpc-server 
node-1	|     uds.path: "/tmp/.tmpm7gsid/node-1/node.sock"
node-1	| 2025-03-10T08:18:10.312435Z INFO restate_node::init
node-1	|   Trying to join the cluster 'local-cluster'
node-1	| on rs:worker-15
node-2	| 2025-03-10T08:18:10.329884Z INFO restate_core::network::net_util
node-2	|   Server listening
node-2	| on rs:worker-0
node-2	|   in restate_core::network::net_util::server
node-2	|     server_name: node-rpc-server 
node-2	|     uds.path: "/tmp/.tmpm7gsid/node-2/node.sock"
node-2	| 2025-03-10T08:18:10.331852Z INFO restate_node::init
node-2	|   Trying to join the cluster 'local-cluster'
node-2	| on rs:worker-0
node-2	| 2025-03-10T08:18:10.333472Z INFO restate_metadata_server::raft::server
node-2	|   Cluster has not been provisioned, yet. Awaiting provisioning via `restatectl provision`
node-2	| on rs:worker-14
node-1	| 2025-03-10T08:18:10.338797Z INFO restate_metadata_server::raft::server
node-1	|   Run as member of the metadata cluster
node-1	|     configuration: v1; [N1]
node-1	| on rs:worker-15
node-1	|   in restate_metadata_server::raft::server::run
node-1	|     member_id: N1:30f
node-1	| 2025-03-10T08:18:10.360196Z INFO restate_metadata_server::raft::server
node-1	|   Won metadata cluster leadership
node-1	| on rs:worker-15
node-1	|   in restate_metadata_server::raft::server::run
node-1	|     member_id: N1:30f
node-1	| 2025-03-10T08:18:10.411320Z INFO restate_node
node-1	|   Cluster 'local-cluster' has been automatically provisioned
node-1	| on rs:worker-0
node-2	| 2025-03-10T08:18:10.420210Z INFO restate_node::init
node-2	|   My Node ID is N2:1
node-2	|     roles: metadata-server | log-server
node-2	|     address: unix:/tmp/.tmpm7gsid/node-2/node.sock
node-2	|     location: 
node-2	| on rs:worker-0
node-1	| 2025-03-10T08:18:10.487742Z INFO restate_metadata_server::raft::server
node-1	|   Adding node 'N2' to metadata cluster
node-1	| on rs:worker-0
node-1	|   in restate_metadata_server::raft::server::run
node-1	|     member_id: N1:30f
node-3	| 2025-03-10T08:18:10.493745Z INFO restate_node::init
node-3	|   My Node ID is N3:1
node-3	|     roles: metadata-server | log-server
node-3	|     address: unix:/tmp/.tmpm7gsid/node-3/node.sock
node-3	|     location: 
node-3	| on rs:worker-11
node-1	| 2025-03-10T08:18:10.495967Z INFO restate_metadata_server::raft::server
node-1	|   Applied new configuration
node-1	|     configuration: v2; [N1, N2]
node-1	| on rs:worker-0
node-1	|   in restate_metadata_server::raft::server::run
node-1	|     member_id: N1:30f
node-1	| 2025-03-10T08:18:10.508237Z INFO restate_node::init
node-1	|   My Node ID is N1:2
node-1	|     roles: metadata-server | log-server
node-1	|     address: unix:/tmp/.tmpm7gsid/node-1/node.sock
node-1	|     location: 
node-1	| on rs:worker-11
node-2	| 2025-03-10T08:18:10.520359Z INFO restate_metadata_server::raft::server
node-2	|   Run as member of the metadata cluster
node-2	|     configuration: v0; []
node-2	| on rs:worker-11
node-2	|   in restate_metadata_server::raft::server::run
node-2	|     member_id: N2:375
node-1	| 2025-03-10T08:18:10.531658Z INFO restate_metadata_server::raft::server
node-1	|   Adding node 'N3' to metadata cluster
node-1	| on rs:worker-0
node-1	|   in restate_metadata_server::raft::server::run
node-1	|     member_id: N1:30f
node-2	| 2025-03-10T08:18:10.682184Z INFO restate_metadata_server::raft::server
node-2	|   Restored configuration from snapshot
node-2	|     configuration: v2; [N1, N2]
node-2	| on rs:worker-0
node-2	|   in restate_metadata_server::raft::server::run
node-2	|     member_id: N2:375
node-1	| 2025-03-10T08:18:10.736983Z INFO restate_metadata_server::raft::server
node-1	|   Applied new configuration
node-1	|     configuration: v3; [N1, N3, N2]
node-1	| on rs:worker-13
node-1	|   in restate_metadata_server::raft::server::run
node-1	|     member_id: N1:30f
node-3	| 2025-03-10T08:18:10.785720Z INFO restate_metadata_server::raft::server
node-3	|   Run as member of the metadata cluster
node-3	|     configuration: v0; []
node-3	| on rs:worker-10
node-3	|   in restate_metadata_server::raft::server::run
node-3	|     member_id: N3:3be
node-2	| 2025-03-10T08:18:10.787072Z INFO restate_metadata_server::raft::server
node-2	|   Applied new configuration
node-2	|     configuration: v3; [N1, N2, N3]
node-2	| on rs:worker-14
node-2	|   in restate_metadata_server::raft::server::run
node-2	|     member_id: N2:375
node-3	| 2025-03-10T08:18:11.011893Z INFO restate_metadata_server::raft::server
node-3	|   Restored configuration from snapshot
node-3	|     configuration: v3; [N3, N1, N2]
node-3	| on rs:worker-0
node-3	|   in restate_metadata_server::raft::server::run
node-3	|     member_id: N3:3be
node-1	| 2025-03-10T08:18:11.122352Z INFO restate_node
node-1	|   Restate node roles [metadata-server | log-server] were started
node-1	| on rs:worker-4
node-3	| 2025-03-10T08:18:11.211473Z INFO restate_node
node-3	|   Restate node roles [metadata-server | log-server] were started
Error: could not seal loglet because insufficient nodes confirmed the seal. The nodeset status is []
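The WARN at 08:18:11.454 ("Cannot seal the loglet as all nodeset members are in `Provisioning` storage state") together with the empty nodeset status in the error suggests a race: the test attempts the seal before any log server has left `Provisioning`, so zero nodes can confirm it. A minimal self-contained sketch of a "wait until the nodeset is ready before sealing" guard, using placeholder types and a simulated state query (not Restate's actual API):

```rust
use std::time::{Duration, Instant};

// Hypothetical storage states, mirroring the `Provisioning` state from the log.
#[derive(Clone, Copy, PartialEq)]
enum StorageState {
    Provisioning,
    ReadWrite,
}

// Placeholder for querying a log server's current storage state; here each
// node simply becomes ready after a short simulated provisioning delay.
fn storage_state(node: usize, now: Instant, start: Instant) -> StorageState {
    let ready_after = Duration::from_millis(10 * (node as u64 + 1));
    if now.duration_since(start) >= ready_after {
        StorageState::ReadWrite
    } else {
        StorageState::Provisioning
    }
}

/// Poll until every nodeset member has left `Provisioning`, or time out.
/// Sealing before this point cannot gather any seal confirmations.
fn wait_for_nodeset_ready(nodes: &[usize], timeout: Duration) -> Result<(), String> {
    let start = Instant::now();
    loop {
        let now = Instant::now();
        if nodes
            .iter()
            .all(|&n| storage_state(n, now, start) != StorageState::Provisioning)
        {
            return Ok(());
        }
        if now.duration_since(start) > timeout {
            return Err("nodeset members still provisioning".into());
        }
        std::thread::sleep(Duration::from_millis(5));
    }
}

fn main() {
    let nodes = [0, 1, 2];
    match wait_for_nodeset_ready(&nodes, Duration::from_secs(1)) {
        Ok(()) => println!("nodeset ready; safe to seal"),
        Err(e) => println!("seal would fail: {e}"),
    }
}
```

Whether the fix belongs in the test (wait for the nodeset to leave `Provisioning` before triggering the seal) or in the seal task itself (retry instead of failing) is an open question; the sketch only illustrates the ordering constraint.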
