Extremely slow image pulls in Singapore region #2390
We are also observing the same issue on our EC2 server in the Singapore region. The issue is with our private Docker Hub repositories; public repositories are working fine. We have checked from Mumbai (ap-south-1) and the image pull works there.
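If it helps others compare, timing a public pull against a private one from the same host isolates the repository type as the variable; a minimal sketch, where myorg/private-image is a placeholder for your own private repository:

```shell
# Public image: reported to pull at normal speed
time docker pull alpine:latest

# Private image: reported slow; 'myorg/private-image' is a placeholder
docker login
time docker pull myorg/private-image:latest
```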
It also fails for me in the Singapore region (ap-southeast-1); I wrote to Docker Hub support a few hours ago. It works fine for me in us-east-1 and eu-west-1.
Hi folks, we're looking into this. If you open a support ticket, we can (securely) collect some information that will help us troubleshoot the issue with our CDN provider. Thanks!
Hi, I opened one 9 hours ago :/ This is the thread ID: thread::UL33nXfrEZOcIDe5SD0qHAQ::
We are also facing the exact same issue from the AWS Fargate service in both the Singapore and Jakarta regions. The issue is intermittent and affects specific layers of an image, in our case for private org/repo images.
Created a support case as well, should any further information be required. Testing again today and I can see there has been some improvement. Previously image …
Edit: Adding the thread ID for the support case
I'm experiencing slow Docker image pulls from our AWS …
Oh my, yes, same for us. It's driving us crazy.
👍 Thanks @jenademoodley for raising this issue. The slowdown is becoming a major issue across our clusters located in …
Also experiencing the same issue when pulling from a residential ISP in Singapore (M1). Some layers are quick to download whilst the others are barely moving at all.
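To put a number on a single slow layer, one can time a blob download directly against the Docker Hub v2 registry API; a sketch using the public library/alpine image (assumes jq is installed, and that Docker Hub returns a single-platform manifest for this Accept header):

```shell
# Anonymous pull token for the public library/alpine repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)

# First layer digest from the image manifest
DIGEST=$(curl -s \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://registry-1.docker.io/v2/library/alpine/manifests/latest" \
  | jq -r '.layers[0].digest')

# Time the blob download itself (-L follows the redirect to the CDN)
curl -s -o /dev/null -L \
  -H "Authorization: Bearer $TOKEN" \
  -w 'ttfb=%{time_starttransfer}s total=%{time_total}s speed=%{speed_download} bytes/s\n' \
  "https://registry-1.docker.io/v2/library/alpine/blobs/${DIGEST}"
```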
We did some traceroute-ing from an EKS cluster in ap-southeast-1 after experiencing failed image pulls for over an hour on some images. We've seen a variety of symptoms.
During the traceroute, we noted that ap-southeast-1 calls to docker.io (not necessarily the registry URL) were getting routed to AWS us-east-1 (based on public IP). Is there a location closer that it should have been routed to? After discussing with AWS, their suggestion was to use something like ECR or to host the image registry ourselves. Until we have this set up, we've effectively cut off the ap-southeast-1 region from our deploys and customer interactions, since our current deployment mechanism waits for deployments in k8s to become ready before proceeding (and times out after 1 hour, even if it's just rolling 3 pods). We are also going to start testing Docker's …
MTR report from an EKS node in ap-southeast-1
MTR report from my local laptop in the Boston area
MTR report from an EKS node in us-east-1
EDIT: Submitted a support case with Docker that links to this thread: Case ID 00106860.
EDIT 2: It's also worth noting that this may be more than just Docker image pulls; I frequently also get disconnected from the k8s API when trying to access the cluster API itself in ap-southeast-1 from the us-east-1 area (local, not within AWS).
EDIT 3: I'm starting to see k8s API EOFs and internal errors in ap-south-1 as well (no reports of image pull issues there), and it's worth calling out this report of undersea cable cuts in Vietnam: https://www.reuters.com/world/asia-pacific/three-vietnams-five-undersea-internet-cables-are-down-2024-06-17/
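For anyone wanting to collect comparable data, an MTR report plus a DNS check against the registry endpoints can be generated along these lines; a sketch (treating production.cloudflare.docker.com as Docker Hub's CDN backend is an assumption):

```shell
# Where does DNS send us for the registry and its CDN backend?
dig +short registry-1.docker.io
dig +short production.cloudflare.docker.com   # assumed Docker Hub CDN host

# Wide-format MTR report against the registry host, 30 probe cycles
mtr --report --report-wide --report-cycles 30 registry-1.docker.io
```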
Issue has recovered for me, pulls are quick to download again.
Not in ap-southeast-1; this is really painful, and it's even worse to have no answer from the Docker support team after 4 days.
Pulls are still extremely slow for us in the Singapore (ap-southeast-1) and Jakarta (ap-southeast-3) regions. Downloads of some specific layers are taking a long time. Things were better yesterday and over the weekend.
Is there any workaround for this, for example moving the EC2 instance to another region?
Same issue here. It happens locally in our office and on all GCP/AWS machines located in asia-southeast (Singapore). Any image we pull from docker.io takes hours now. We got around it by using GCP's Artifact Registry for some of the images that we customised.
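For reference, mirroring a customised image through Artifact Registry looks roughly like this; a sketch with placeholder project/repository names (my-project, my-repo):

```shell
# Authenticate Docker against the regional Artifact Registry endpoint
gcloud auth configure-docker asia-southeast1-docker.pkg.dev

# Pull once (slowly) from Docker Hub, then retag and push into Artifact Registry
docker pull grafana/oncall:v1.3.115
docker tag grafana/oncall:v1.3.115 \
  asia-southeast1-docker.pkg.dev/my-project/my-repo/oncall:v1.3.115
docker push asia-southeast1-docker.pkg.dev/my-project/my-repo/oncall:v1.3.115
```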
Facing the same issue on all my EKS clusters running in the Singapore region since Friday. The problem is specific to the Docker registry; quay.io works fine.
Facing the same issue :/
Yes, I am facing the same issue. If anyone has an answer, please share it; I am stuck.
We should probably avoid spamming and just +1 the issue to show how many of us are facing the issue.
+1
Same issue here +1
+1
Same issue +1
Same issue here +1. It takes forever just to pull my public repo on Docker Hub.
+1
+1
I am seeing the same issue.
Same issue +1
+1
+1
+1
It's getting better, guys!!!
Problem description
The grafana/oncall:v1.3.115 image has issues with layers 7, 10, and 12.
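To map those layer indexes to concrete blob digests (useful when filing a support ticket), the Docker Hub v2 API can be queried directly; a sketch, assuming the tag resolves to a single-platform manifest (multi-arch tags would need the manifest-list media type first):

```shell
# Anonymous pull token for the grafana/oncall repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:grafana/oncall:pull" | jq -r .token)

# Fetch the image manifest and list layer digests in order
curl -s \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://registry-1.docker.io/v2/grafana/oncall/manifests/v1.3.115" \
  | jq -r '.layers[].digest'
```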