
The initialization of tasks seems very slow; can it be improved? #1763

Closed

shaoyangyu opened this issue Oct 31, 2017 · 4 comments

@shaoyangyu

Hi there!
I'm running the latest Concourse Docker image on an Ubuntu VM on ESXi 6.

I use dcind (https://hub.docker.com/r/amidos/dcind/) for my task image, but every time the task initialization is very time consuming (it needs 5 to 10 minutes). I'm not sure whether I misconfigured something or it's something else. Any help on this? Thanks!
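
For diagnosis, here is a sketch of what can be inspected while a task sits in "initializing" ("ci" below is a placeholder fly target name, not something from this deployment):

# While a build is stuck in "initializing", inspect the worker from another
# shell ("ci" is a placeholder fly target):
fly -t ci workers      # is the worker registered and running?
fly -t ci containers   # active containers, including resource check containers
fly -t ci volumes      # volumes; image fetching/unpacking shows up here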

@jtarchie
Contributor

jtarchie commented Nov 1, 2017

@shaoyangyu, are you able to provide more information about your deployment? The issue template helps guide you in creating a decisive issue; I've copied and pasted it below.

Bug Report

Bug reports are pretty free-form; just replace this with whatever. You can also help us triage the issue by including steps to reproduce, expected results, and the actual result. Help us help you!

The following can also be handy:

  • Concourse version:
  • Deployment type (BOSH/Docker/binary):
  • Infrastructure/IaaS:
  • Browser (if applicable):
  • Did this used to work?

@shaoyangyu
Author

hi, @jtarchie

Sorry for the badly organized info.

Here is the issue description:

Concourse version: Docker image, tag 3.6.0
Deployment type (BOSH/Docker/binary): Docker
Infrastructure/IaaS: VMware/ESXi (VMware ESXi as the hypervisor, with an Ubuntu 14.04 VM as the host; Concourse CI is installed on the VM).
Did this used to work? It works, but task initialization is very slow.
Problem description:

  1. Every time, task initialization is very slow, taking 5 to 10 minutes.
  2. Disk I/O seems very heavy: while the pipeline is running, other applications on the same ESXi host can't even serve requests and keep timing out, but once the pipeline completes everything works fine again. Is there a known issue with disk I/O? (A quick way to confirm the I/O pressure is sketched below.)
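
A minimal sketch for confirming the disk pressure on the worker VM during a build (assumes the sysstat package on Ubuntu; the fly target "ci" and the pipeline name "my-pipeline" are placeholders):

# On the Ubuntu VM hosting the Concourse worker:
sudo apt-get install -y sysstat
iostat -dx 2           # watch %util and await on the VM's disks

# In another shell, kick off a build and compare idle vs. in-build numbers:
fly -t ci trigger-job --job my-pipeline/build --watch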

Below is my pipeline:


resource_types:
- name: git-multibranch
  type: docker-image
  source:
    repository: ((ci_docker_server))/git_multi_branch
    insecure_registries: [ ((ci_docker_server)) ]

resources:
- name: git_source_code
  type: git-multibranch
  source:
    uri: ((git_source_repository))
    ignore_branches: '(master|develop|CICD/Deploy|CICD/Build)'
    branch: ((branch))
    private_key: ((gitee_id_key))

- name: git_source_code_tag
  type: git
  source:
    uri: ((git_source_repository))
    branch: ((branch))
    private_key: ((gitee_id_key))
    disable_ci_skip: true

- name: image_task_dcind
  type: docker-image
  source:
    repository: ((ci_docker_server))/dcind
    tag: ruby
    insecure_registries: [ ((ci_docker_server)) ]

jobs:
- name: build[((proj_name))]
  serial: true
  build_logs_to_retain: 2
  plan:
  - aggregate:
    - get: git_source_code
      trigger: true
      params:
        depth: 1
        submodules: none
    - get: image_task_dcind
  - task: build
    privileged: true
    attempts: 2
    image: image_task_dcind
    file: git_source_code/ci/build.yml
    params:
      TAG: ((ci_image_tag))
      DOCKER_SERVER: ((ci_docker_server))
      RELEASE_DOCKER_SERVER: ((release_docker_server))
      BASE_IMAGE: ((base_image))

- name: test[((proj_name))]
  serial: true
  build_logs_to_retain: 2
  plan:
  - aggregate:
    - get: git_source_code
      trigger: true
      passed:
      - build[((proj_name))]
      params:
        depth: 1
        submodules: none
    - get: image_task_dcind
  - task: integration_test
    privileged: true
    image: image_task_dcind
    file: git_source_code/ci/test.yml
    params:
      DEBUG_YAML: 'docker/docker-compose.ci.yml'
      DOCKER_SERVER: ((ci_docker_server))

- name: release[((proj_name))]
  serial: true
  build_logs_to_retain: 2
  plan:
  - aggregate:
    - get: git_source_code
      trigger: true
      passed:
      - test[((proj_name))]
      params:
        depth: 1
        submodules: none
    - get: image_task_dcind
  - task: release_to_docker
    privileged: true
    image: image_task_dcind
    file: git_source_code/ci/release.yml
    params:
      DOCKER_SERVER: ((ci_docker_server))
      RELEASE_DOCKER_SERVER: ((release_docker_server))
      IMAGE_NAME: ((image_name))
      IMAGE_TAG: ((ci_image_tag))
      ali_docker_user: ((ali_docker_user))
      ali_docker_password: ((ali_docker_password))

- name: deploy[((proj_name))]
  serial: true
  build_logs_to_retain: 2
  plan:
  - aggregate:
    - get: git_source_code_tag
      params:
        submodules: none
    - get: git_source_code
      passed:
      - release[((proj_name))]
      trigger: true
      params:
        depth: 1
        submodules: none
    - get: image_task_dcind
  - task: deploy_to_rancher
    privileged: true
    image: image_task_dcind
    file: git_source_code/ci/deploy.yml
    params:
      ENCRYPT_PASS: ((encrypt_pass))
  - put: git_source_code_tag
    get_params:
      disable_git_lfs: true
      submodules: none
    params:
      repository: git_source_code_tag
      only_tag: true
      tag: tag/newtag
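
Worth noting: every job above does a get of image_task_dcind and runs its task with image: image_task_dcind, so each task start has to unpack that image into the task's rootfs. Checking how big the image actually is may explain part of the initialization time (a sketch; registry.example.com:5000 is a placeholder for ((ci_docker_server))):

# From any host that can reach the insecure registry:
docker pull registry.example.com:5000/dcind:ruby
docker images registry.example.com:5000/dcind   # note the SIZE column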

Below is the config for the build task; each task uses the same Docker image.

---
platform: linux
inputs:
  - name: git_source_code
caches:
  - path: git_source_code/vendor
run:
  path: bash
  args:
  - -ec
  - |
      # Bring up the nested Docker daemon (dcind helper script).
      source /docker-lib.sh
      start_docker $DOCKER_SERVER
      ls
      # Pull the base image from the CI registry and retag it for the
      # release registry.
      docker pull "$DOCKER_SERVER/$BASE_IMAGE"
      docker tag "$DOCKER_SERVER/$BASE_IMAGE" "$RELEASE_DOCKER_SERVER/$BASE_IMAGE"
      docker images
      pushd git_source_code

      # Record a build timestamp, then build and push images via the
      # project's rake tasks.
      echo $(TZ='Asia/Shanghai' date +%y%m%d%H)>>update.log
      bundle install
      DOCKER_REPO=$DOCKER_SERVER bundle exec rake docker:build
      DOCKER_REPO=$DOCKER_SERVER bundle exec rake docker:release
      popd
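
To time the task in isolation, it can also be run outside the pipeline (a sketch; "ci" is a placeholder fly target, run from the repo checkout; note that build.yml declares no image_resource, since the image normally comes from the pipeline's image: override, so an image would have to be provided here as well):

# Run the build task standalone and watch how long initialization takes
# ("ci" is a placeholder target; --privileged mirrors the pipeline's
# privileged: true).
fly -t ci execute \
  --config ci/build.yml \
  --input git_source_code=. \
  --privileged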

@gmile

gmile commented Dec 20, 2017

Seeing the same issue when using dcind. I can give an SSH access to a throwaway CI where the problem can be observed, for someone from Concourse team if they are willing to look into it.

@vito
Member

vito commented Dec 28, 2017

Folding this into #1404
