
error syncing pod - failed to start container artifact (python SDK) #19622

@damccorm

Description


Error syncing pod 5966....e59c ("<container name>-08131110-7hcg-harness-fbm2_default(5966....e59c)"),
skipping: failed to "StartContainer" for "artifact" with CrashLoopBackOff: "Back-off 5m0s restarting
failed container=artifact pod=<container name>-08131110-7hcg-harness-fbm2_default(5966.....e59c)"

I'm seeing these in a streaming pipeline. When I run the pipeline in batch mode, I don't see anything. The messages appear roughly every 0.5 to 5 seconds.

I've been trying to scale my streaming pipeline efficiently and found that adding more workers / dividing into more groups isn't scaling as well as I expect. Perhaps this is contributing. (How do I tell whether workers are being utilized or not?)

One pipeline that never completed (it reached one of the last steps and then log messages simply ceased on the workers, with no error) had these entries in the kubelet logs. I checked some of my other streaming pipelines and found the same thing happening, even though those pipelines did complete.

In a couple of my streaming pipelines, I've gotten the following error message, despite the pipeline eventually finishing:


Processing stuck in step s01 for at least 05m00s without outputting or completing in state process

Perhaps the two issues are related?

This happens when running with 5 or 7 (or more) workers in streaming mode. I don't see it when running with 1 worker.

The pipeline uses requirements.txt and setup.py, as well as an extra package and save_main_session.
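For reference, a pipeline combining all of those dependency options would typically be launched with arguments like the following (a sketch only: the project, bucket, and package paths are placeholders, not taken from the original report):

```python
# Hypothetical launch arguments matching the setup described above.
# All paths, names, and worker counts are illustrative placeholders.
pipeline_args = [
    "--runner=DataflowRunner",
    "--streaming",
    "--num_workers=5",
    # Dependency staging options mentioned in the report:
    "--requirements_file=requirements.txt",   # pip requirements for workers
    "--setup_file=./setup.py",                # local package with the pipeline code
    "--extra_package=./dist/extra_pkg-0.1.tar.gz",  # additional tarball dependency
    "--save_main_session",                    # pickle __main__ globals for workers
]

# These would then be passed to PipelineOptions, e.g.:
#   from apache_beam.options.pipeline_options import PipelineOptions
#   options = PipelineOptions(pipeline_args)
```

The `--requirements_file`, `--setup_file`, `--extra_package`, and `--save_main_session` flags are the standard Beam Python SDK setup options; the commented-out `PipelineOptions` usage is how they are normally consumed.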

Imported from Jira BEAM-7975. Original Jira may contain additional context.
Reported by: jimpremise.
