{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":772270204,"defaultBranch":"main","name":"cgyle","ownerLogin":"SUSE-Enceladus","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2024-03-14T21:29:53.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/40608559?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1726508575.0","currentOid":""},"activityList":{"items":[{"before":null,"after":"45fc3b4960abeac9a70428c04b0c6d486ab98c4a","ref":"refs/heads/catalog_retry","pushedAt":"2024-09-16T17:42:55.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Add catalog retry for get_catalog_podman_search\n\nIf the catalog search fails on a podman search error,\nretry the connection and don't give up too early","shortMessageHtmlLink":"Add catalog retry for get_catalog_podman_search"}},{"before":null,"after":"d6d830ffc5d915d33d907ef86e489598911cf7d0","ref":"refs/heads/delay_restart","pushedAt":"2024-09-16T16:49:32.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Add delay time for service restart\n\nWhen cgyle starts the intermediate container to do the job and\nsomething fails, then cgyle exits with an exit code != 0. At the same\ntime but disconnected from the cgyle process controlled by systemd\nthe podman command running in the background ends and cleans up its\nresources. I can imagine that systemd tries to immediately start a\nnew cgyle process while the podman cleanup of the former is not yet\ncomplete. A second run of the container on the same network port\nleads to conflict. 
This commit adds a restart delay to prevent\nthis condition","shortMessageHtmlLink":"Add delay time for service restart"}},{"before":"be6267c4c706f475ca9dc9ed76e35726757c4785","after":"fb4d3a637705a3583de6008e0aba1ebb2071bcf1","ref":"refs/heads/main","pushedAt":"2024-09-12T13:40:11.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Bump version: 1.1.3 → 1.1.4","shortMessageHtmlLink":"Bump version: 1.1.3 → 1.1.4"}},{"before":"10abd9509926a6a450de20acdd1008e3c99fb983","after":null,"ref":"refs/heads/better_timing_no_hard_stop","pushedAt":"2024-09-12T13:39:46.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"}},{"before":"39803e28a81bc7657b5d1ccf4a8b2cdbafa4745f","after":"be6267c4c706f475ca9dc9ed76e35726757c4785","ref":"refs/heads/main","pushedAt":"2024-09-12T13:39:43.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Merge pull request #30 from SUSE-Enceladus/better_timing_no_hard_stop\n\nPrevent thundering herd condition","shortMessageHtmlLink":"Merge pull request #30 from SUSE-Enceladus/better_timing_no_hard_stop"}},{"before":"78f26843ee84813035762a96b2889da6fdf7e5cd","after":"10abd9509926a6a450de20acdd1008e3c99fb983","ref":"refs/heads/better_timing_no_hard_stop","pushedAt":"2024-09-12T13:19:57.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Prevent thundering herd condition\n\nThe infrastructure runs several network bandwidth 
consuming\nprocesses potentially at the same time. To avoid this add a\nrandom delay on the timer. In addition we found that the\nhard stop in the service file hits us in the production\nenvironment. Let's add a restart condition on failure for\nthis case too","shortMessageHtmlLink":"Prevent thundering herd condition"}},{"before":null,"after":"78f26843ee84813035762a96b2889da6fdf7e5cd","ref":"refs/heads/better_timing_no_hard_stop","pushedAt":"2024-09-12T13:15:47.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Prevent thundering herd condition\n\nThe infrastructure runs several network bandwidth consuming\nprocesses potentially at the same time. To avoid this add a\nrandom delay on the timer. In addition we found that the\nhard stop in the service file hits us in the production\nenvironment. Given we have set a retry count on the skopeo\nlevel it is not required to set a max runtime on the service","shortMessageHtmlLink":"Prevent thundering herd condition"}},{"before":"32508b7fb1db4baafcd2d50906d753c86c0209de","after":"39803e28a81bc7657b5d1ccf4a8b2cdbafa4745f","ref":"refs/heads/main","pushedAt":"2024-09-06T09:59:05.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Bump version: 1.1.2 → 1.1.3","shortMessageHtmlLink":"Bump version: 1.1.2 → 1.1.3"}},{"before":"b1678a00f9084bbe4d352f7afddc9c1809151bc6","after":null,"ref":"refs/heads/max_time_for_service_run","pushedAt":"2024-09-06T09:58:40.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus 
Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"}},{"before":"1e23a12bc482673f73f34f2cfbf4375de9a6e7be","after":"32508b7fb1db4baafcd2d50906d753c86c0209de","ref":"refs/heads/main","pushedAt":"2024-09-06T09:58:38.000Z","pushType":"pr_merge","commitsCount":3,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Merge pull request #28 from SUSE-Enceladus/max_time_for_service_run\n\nMake sure cgyle runtime is limited","shortMessageHtmlLink":"Merge pull request #28 from SUSE-Enceladus/max_time_for_service_run"}},{"before":"9d9a62c72a7d686f0e6628233fe66cc733d87804","after":"b1678a00f9084bbe4d352f7afddc9c1809151bc6","ref":"refs/heads/max_time_for_service_run","pushedAt":"2024-09-06T07:15:04.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Limit the number of copy tries\n\nskopeo may end up in an endless loop attempting to copy a specific\ncontainer. This creates a problem in an automated environment such as\ncgyle potentially causing endless runtime. We limit the number\nof attempts we let skopeo take to copy a given container to 5. This\nallows us to move on and copy other containers. 
The next time cgyle runs\nwe will try the previously failed container again.","shortMessageHtmlLink":"Limit the number of copy tries"}},{"before":"fa0b7d9adf8e07e698e34de6edef96ffea45b330","after":null,"ref":"refs/heads/copyAttemptLimit","pushedAt":"2024-09-06T07:14:11.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"}},{"before":"22f73f565e37b9b94c0a027445af79db776f216a","after":"9d9a62c72a7d686f0e6628233fe66cc733d87804","ref":"refs/heads/max_time_for_service_run","pushedAt":"2024-09-06T07:14:05.000Z","pushType":"pr_merge","commitsCount":3,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Merge pull request #29 from SUSE-Enceladus/copyAttemptLimit\n\nLimit the number of copy tries","shortMessageHtmlLink":"Merge pull request #29 from SUSE-Enceladus/copyAttemptLimit"}},{"before":"c0d52538058c659223cec99730d44aae5faaf4d1","after":"fa0b7d9adf8e07e698e34de6edef96ffea45b330","ref":"refs/heads/copyAttemptLimit","pushedAt":"2024-09-06T07:08:07.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Added tests and fixed arch specific order of call options","shortMessageHtmlLink":"Added tests and fixed arch specific order of call options"}},{"before":null,"after":"c0d52538058c659223cec99730d44aae5faaf4d1","ref":"refs/heads/copyAttemptLimit","pushedAt":"2024-09-05T18:27:27.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"rjschwei","name":"Robert Schweikert","path":"/rjschwei","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1087183?s=80&v=4"},"commit":{"message":"Limit the number of copy tries\n\nskopeo may end up in an endless 
loop attempting to copy a specific\ncontainer. This creates a problem in an automated environment such as\ncgyle potentially causing endless runtime. We limit the number\nof attempts we let skopeo take to copy a given container to 5. This\nallows us to move on and copy other containers. The next time cgyle runs\nwe will try the previously failed container again.","shortMessageHtmlLink":"Limit the number of copy tries"}},{"before":null,"after":"22f73f565e37b9b94c0a027445af79db776f216a","ref":"refs/heads/max_time_for_service_run","pushedAt":"2024-09-05T13:19:58.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Make sure cgyle runtime is limited\n\nThe sync process is run by a timer of 6h at the moment.\nIf the service is still running when the timer kicks in,\nno restart will happen. In cgyle each container is fetched\nby its own skopeo copy process. We found situations in which\nthat copy process receives an error and reconnects in a loop\nforever. As of today there is no way to tell skopeo to stop\nthis iteration after some time and I also believe there\nis a bug in skopeo causing this because a simple restart\nof the cgyle service fixes the issue. For us it's important\nthat the cgyle process is not blocked forever. 
Thus this\ncommit suggest a max runtime controlled by systemd of 5h.","shortMessageHtmlLink":"Make sure cgyle runtime is limited"}},{"before":"26ff6e74daf0c9973c22803fb2e38d6a6706aa92","after":"1e23a12bc482673f73f34f2cfbf4375de9a6e7be","ref":"refs/heads/main","pushedAt":"2024-08-16T07:25:07.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Bump version: 1.1.1 → 1.1.2","shortMessageHtmlLink":"Bump version: 1.1.1 → 1.1.2"}},{"before":"491a58a9d0a14bb935b5cfb2164d7bfb7b0306a6","after":null,"ref":"refs/heads/check_state_file","pushedAt":"2024-08-16T07:24:39.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"}},{"before":"df93bf9023c9345da46bc5403c3da3886acc158b","after":"26ff6e74daf0c9973c22803fb2e38d6a6706aa92","ref":"refs/heads/main","pushedAt":"2024-08-16T07:24:36.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Merge pull request #27 from SUSE-Enceladus/check_state_file\n\nAdded sanity check/cleanup for the scheduler state","shortMessageHtmlLink":"Merge pull request #27 from SUSE-Enceladus/check_state_file"}},{"before":null,"after":"491a58a9d0a14bb935b5cfb2164d7bfb7b0306a6","ref":"refs/heads/check_state_file","pushedAt":"2024-08-15T14:15:39.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Added sanity check/cleanup for the scheduler state\n\nThe CNCF distribution registry stores scheduler metadata\nin the file scheduler-state.json. 
Per issue report at\nhttps://github.com/distribution/distribution/issues/2374\nit can happen that the state file gets corrupted. In the case\nwere we hit this error the JSON file was missing the closing\nbrackets. To recover from this situtation we load the state\nfile and delete it in case of a JSONDecodeError.","shortMessageHtmlLink":"Added sanity check/cleanup for the scheduler state"}},{"before":"b8c28189f08b8526cddc81cb0fd5fb862c5a4498","after":"df93bf9023c9345da46bc5403c3da3886acc158b","ref":"refs/heads/main","pushedAt":"2024-08-14T16:00:29.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Bump version: 1.1.0 → 1.1.1","shortMessageHtmlLink":"Bump version: 1.1.0 → 1.1.1"}},{"before":"48ee33eb4bf2ebe46d040d405597cfee545d9eb0","after":null,"ref":"refs/heads/cleanup_script","pushedAt":"2024-08-14T15:59:52.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"}},{"before":"05917047bdcb52eaab585902352f7279b04d913a","after":"b8c28189f08b8526cddc81cb0fd5fb862c5a4498","ref":"refs/heads/main","pushedAt":"2024-08-14T15:59:49.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Merge pull request #26 from SUSE-Enceladus/cleanup_script\n\nAdded cgyle-pubcloud-infra-cleanup","shortMessageHtmlLink":"Merge pull request #26 from SUSE-Enceladus/cleanup_script"}},{"before":null,"after":"48ee33eb4bf2ebe46d040d405597cfee545d9eb0","ref":"refs/heads/cleanup_script","pushedAt":"2024-08-14T15:10:11.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus 
Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Added cgyle-pubcloud-infra-cleanup\n\nProvider cleanup tool to easy upgrade cgyle within the\nSUSE public cloud infrastructure","shortMessageHtmlLink":"Added cgyle-pubcloud-infra-cleanup"}},{"before":"a45987e180726d533376495d6808aa5be14feb83","after":"05917047bdcb52eaab585902352f7279b04d913a","ref":"refs/heads/main","pushedAt":"2024-08-14T08:03:42.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Bump version: 1.0.16 → 1.1.0","shortMessageHtmlLink":"Bump version: 1.0.16 → 1.1.0"}},{"before":"88c6601e4446dd94b3076606f78da35ec2c98240","after":null,"ref":"refs/heads/decouple_cgyle_container_run_from_network","pushedAt":"2024-08-14T08:03:16.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"}},{"before":"e6156309b5daf8af7aa8123518de2856ec6dd8a7","after":"a45987e180726d533376495d6808aa5be14feb83","ref":"refs/heads/main","pushedAt":"2024-08-14T08:03:13.000Z","pushType":"pr_merge","commitsCount":3,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Merge pull request #23 from SUSE-Enceladus/decouple_cgyle_container_run_from_network\n\nDecouple use of registry container from network","shortMessageHtmlLink":"Merge pull request #23 from 
SUSE-Enceladus/decouple_cgyle_container_r…"}},{"before":"61a15b6bb9bc74505a0fb9c197ef5cbaa9fce9f5","after":"88c6601e4446dd94b3076606f78da35ec2c98240","ref":"refs/heads/decouple_cgyle_container_run_from_network","pushedAt":"2024-08-14T07:57:52.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Update documentation","shortMessageHtmlLink":"Update documentation"}},{"before":"1f91c3d58aa18de477ad124b14dfd806c65018a8","after":null,"ref":"refs/heads/auto_remove_container","pushedAt":"2024-08-14T07:44:57.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"}},{"before":"e28fbe6572775f90b18fc8eab595e5c6963ca07a","after":"e6156309b5daf8af7aa8123518de2856ec6dd8a7","ref":"refs/heads/main","pushedAt":"2024-08-14T07:44:55.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"schaefi","name":"Marcus Schäfer","path":"/schaefi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/912234?s=80&v=4"},"commit":{"message":"Merge pull request #24 from SUSE-Enceladus/auto_remove_container\n\nAuto remove container instance","shortMessageHtmlLink":"Merge pull request #24 from SUSE-Enceladus/auto_remove_container"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0xNlQxNzo0Mjo1NS4wMDAwMDBazwAAAAS3t_GP","startCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0xNlQxNzo0Mjo1NS4wMDAwMDBazwAAAAS3t_GP","endCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOC0xNFQwNzo0NDo1NS4wMDAwMDBazwAAAASZ938b"}},"title":"Activity · SUSE-Enceladus/cgyle"}
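The delay_restart and better_timing_no_hard_stop commits in the feed above describe two systemd-level fixes: a delay before a failed service is restarted (so podman can finish cleaning up the previous container run), and a randomized offset on the timer (so several bandwidth-heavy jobs don't fire at once). A minimal sketch of what such unit drop-ins could look like; `Restart`, `RestartSec`, `OnUnitActiveSec`, and `RandomizedDelaySec` are standard systemd directives, but the unit names and concrete values here are assumptions, not cgyle's shipped configuration.

```ini
# Hypothetical cgyle.service drop-in: restart on failure, but wait
# long enough for podman to clean up the previous container run first
[Service]
Restart=on-failure
RestartSec=30

# Hypothetical cgyle.timer drop-in: re-run the sync every 6h (matching
# the interval mentioned in the commit message) with a random delay so
# parallel network-heavy processes don't all start at the same moment
[Timer]
OnUnitActiveSec=6h
RandomizedDelaySec=15m
```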
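The "Limit the number of copy tries" commits cap skopeo at five attempts per container so one stuck image cannot block the whole sync run. Recent skopeo releases expose a `--retry-times` option on `skopeo copy` for exactly this; the helper below is a hypothetical sketch of assembling such a call, not cgyle's actual code.

```python
from typing import List


def build_skopeo_copy_call(
    source: str, target: str, max_attempts: int = 5
) -> List[str]:
    """Build a skopeo copy command line with a bounded retry count.

    Hypothetical helper: capping attempts lets the caller move on to
    other containers and retry the failed one on the next cgyle run.
    """
    return [
        'skopeo', 'copy',
        '--all',                          # copy all architectures of the image
        f'--retry-times={max_attempts}',  # give up after N failed attempts
        f'docker://{source}',
        f'docker://{target}',
    ]
```

A caller would hand this list to `subprocess.run` per container, treating a non-zero exit as "skip for now, retry on the next scheduled run".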
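The check_state_file change recovers from a corrupt scheduler-state.json by loading the file and deleting it on a JSONDecodeError, so the registry can rebuild it. A minimal sketch of that sanity check, assuming a standalone helper (the function name and return convention are illustrative, not cgyle's API):

```python
import json
import os


def sanitize_scheduler_state(state_file: str) -> bool:
    """Delete the registry scheduler state file if it is not valid JSON.

    Returns True if the file was valid JSON (or absent), False if a
    corrupt file was found and removed.
    """
    if not os.path.exists(state_file):
        return True
    try:
        with open(state_file) as handle:
            json.load(handle)
        return True
    except json.JSONDecodeError:
        # Corrupt state, e.g. missing closing brackets as reported in
        # distribution/distribution#2374: drop the file so the registry
        # can recreate it on the next start
        os.remove(state_file)
        return False
```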