Deploy package repos on MRG's S3 infra #3380
Comments
I didn't understand what exactly you meant.

Motivation: we want to solve the problems we currently have with the packagecloud repositories. More precisely, there is one biggest problem: we have a lot of packages (tarantool itself / modules / connectors and so on × multiple supported distros × multiple versions), so we have to prune old package versions to keep the payment within some limit. Pruning old packages (even though it keeps the latest versions) makes the repositories unusable for many cases, because one is unable to hold a specific version of packages without saving them locally somewhere. I know projects (including internal ones!) that keep RPMs inside the project repository; I have also seen customers' mirrors of our packagecloud repository: users try to work around the problem. I have also seen questions about how to install a specific version to, say, bisect some problem.

We have MRG infrastructure that provides S3-compatible storage and abstracts us away from most of the problems of supporting a reliable service. We have the nice mkrepo tool that generates and updates yum and apt repositories, and it supports S3-compatible storages. Our documentation shows download.tarantool.org URLs, which can be switched to another backend almost seamlessly. The only problem here is that packagecloud does not give us the private repository keys, so we need to either ship our new keys within the tarantool package for some period of time (and only then switch), or leave instructions on how to update the keys manually after we switch the download.tarantool.org repos (see #3736).

We discussed a bit with @avtikhon how to achieve the goal. He is looking for other tools instead of mkrepo, but I don't see any reason to do so. I hope he'll clarify his points here. I'll show my points below as answers to the questions that have arisen around this task in recent days. From my side, the strengths of mkrepo:
Pros of creating our own repositories instead of mirroring packagecloud ones:
I see several cons of the proposed use of reprepro (and see no pros):
Alexander (@avtikhon) also met several small problems with mkrepo. We'll resolve them; it should not affect our decision. If there is a suspicion that mkrepo is not of sufficient quality, then it should be said explicitly. At first glance it looks much more maintainable than reprepro and it seems to fit our needs much better. As a Gentoo user I was surprised that we need to separate repositories per <major>.<minor> tarantool version (say, 1.10, 2.1, 2.2, 2.3). As I understood, this is a workaround for two problems:
Brief googling does not give solutions for those problems for apt-get and yum. It seems that package managers in the most popular distros are not very flexible. So we need to provide separate repositories anyway.
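To show what separate per-series repositories look like from the user side, here is a minimal sketch of pinning a <major>.<minor> series; the exact URL layout under download.tarantool.org and the GPG settings are assumptions for illustration only.

```sh
# Sketch, assuming repositories are laid out per release series under
# download.tarantool.org (URL layout and GPG settings are illustrative).

# Debian/Ubuntu: one sources.list entry per <major>.<minor> series.
echo "deb https://download.tarantool.org/tarantool/2.3/ubuntu/ bionic main" \
    | sudo tee /etc/apt/sources.list.d/tarantool_2_3.list

# CentOS: one .repo file per series.
sudo tee /etc/yum.repos.d/tarantool_2_3.repo <<'EOF'
[tarantool_2_3]
name=Tarantool 2.3
baseurl=https://download.tarantool.org/tarantool/2.3/el/7/x86_64/
gpgcheck=1
enabled=1
EOF
```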
'reprepro' is the tool officially suggested by Debian:
Positive:
Other:
mkrepo - "Supports S3 natively as storage (doesn't require a full local copy)."
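As a rough illustration of that point, a sketch of what running mkrepo directly against an S3 path might look like; the credential handling and exact command-line interface are assumptions here, so check mkrepo's own README for the real options.

```sh
# Sketch only: mkrepo pointed at an S3 path, so apt/yum metadata is
# regenerated in place without a full local mirror. Credentials and the
# exact CLI are assumptions; see mkrepo's documentation for details.
export AWS_ACCESS_KEY_ID=<access-key>
export AWS_SECRET_ACCESS_KEY=<secret-key>
mkrepo s3://tarantool-repo/2.3/ubuntu/
```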
So, the options are:
(Note also that reprepro simply doesn't fit if it supports storing only the last version of a package; I already wrote about that above.) For me the choice is obvious. However, this is not my task, so either do it as you want (but don't ask me for review) or ask Kirill (@kyukhin) to decide.
Mounting S3 storage via FUSE eliminates the need for a local mirror. The other points remain valid.
Actually found that s3fs really works and mounts the external S3 storage to a local path; the only issue found is that it fails with path names that include dots, which produces a 404 "not found" response from S3.
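For reference, a minimal sketch of the s3fs approach discussed above; the bucket name, endpoint URL and mount point are placeholders, not the actual MCS values.

```sh
# Sketch of the s3fs mount described above; bucket, endpoint and mount
# point are placeholders.
echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
mkdir -p /mnt/s3-repo
# The endpoint URL below stands in for the real MCS S3 endpoint.
s3fs my-bucket /mnt/s3-repo \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o url=https://s3.example.com \
    -o use_path_request_style
```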
Added ability to store packages additionally at MCS S3. Closes #3380
Added ability to store packages additionally at MCS S3.

The target idea was to add a new way of creating packages at MCS S3, which temporarily duplicates the packaging at PackageCloud by the Packpack tool. It was also needed to pack the binaries in the native style of each packaged OS.

Two separate scripts were created, one per packaging style:
- DEB for Ubuntu and Debian: tools/pub_packs_s3_deb.sh
- RPM for CentOS and Fedora: tools/pub_packs_s3_rpm.sh

Common parts of the scripts:
- create new meta files for the new binaries
- copy the new binaries to MCS S3
- get the previous meta files from MCS S3 and merge in the new meta data for the new binaries
- update the meta files at MCS S3

Different parts:
- The DEB script is based on the external reprepro tool, which needs a prepared path with the required file structure (the .gitlab.mk file was updated for it). It works only at the OS version level, meaning the meta data is updated for all Distributions together.
- The RPM script is based on the external createrepo tool, which doesn't need a prepared file structure and works with the given path with binaries. It works at the OS/Distribution level, meaning the meta data is updated for each Distribution separately.

Closes #3380
Added ability to store packages additionally at MCS S3.

The target idea was to add a new way of creating packages at MCS S3, which temporarily duplicates the packaging at PackageCloud by the Packpack tool. It was also needed to pack the binaries in the native style of each packaged OS.

A standalone script was created for adding package binaries/sources to the DEB or RPM repositories at MCS S3: 'tools/add_pack_s3_repo.sh'.

Common parts of the script:
- create new meta files for the new binaries
- copy the new binaries to MCS S3
- get the previous meta files from MCS S3 and merge in the new meta data for the new binaries
- update the meta files at MCS S3

Different parts:
- The DEB part is based on the external reprepro tool and works only at the OS version level, meaning the meta data is updated for all Distributions together.
- The RPM part is based on the external createrepo tool and works at the OS/Release level, meaning the meta data is updated for each Release separately.

Closes #3380
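For orientation, a hedged sketch of the two metadata-update primitives the script builds on; the paths, codename and package file name are placeholders, and the real logic lives in tools/add_pack_s3_repo.sh.

```sh
# Sketch of the underlying tools the script wraps; paths, codename and
# package file name are placeholders.

# RPM part: createrepo regenerates repodata/ for one Release directory at a
# time, so each Release is updated separately.
createrepo --update /tmp/mirror/fedora/30/x86_64/

# DEB part: reprepro requires a prepared tree with conf/distributions
# describing every Distribution; including a package updates the shared
# metadata for that codename.
reprepro -b /tmp/mirror/ubuntu includedeb bionic tarantool_2.3.1-1_amd64.deb
```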
Created a Dockerfile-based script to check that packages are available in the S3 repository. Set this script to run just after package creation. Follows up #3380
Found that modules may have only binary packages without source packages. The script was changed to be able to work with only binary or only source packages. Follow-up #3380
Added cleanup functionality for the meta files. The script may encounter the following situations:
- Package files were removed at S3 but are still registered: the script stores and registers the new packages at S3 and removes all other registered blocks for the same files in the meta files.
- Package files already exist at S3 with the same hashes: the script skips them with a warning message.
- Package files already exist at S3 with old hashes: the script fails without the force flag; otherwise it stores and registers the new packages at S3 and removes all other registered blocks for the same files in the meta files.
Added the '-s|skip_errors' option flag to skip errors on changed packages to avoid exiting during the script run. Follow-up #3380
Added instructions on the 'product' option with examples. Follow-up #3380
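A purely hypothetical invocation sketch, assuming the script takes the 'product' name and the '-s' (skip_errors) flag mentioned above as command-line arguments; the real option names and calling convention should be taken from the script's own usage output.

```sh
# Hypothetical invocation: '-s' (skip_errors) and a 'product' option are
# mentioned in this thread, but the exact spelling of the product argument
# and the positional arguments here is an assumption.
./tools/add_pack_s3_repo.sh -s -p tarantool path/to/new/packages/
```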
Now we have an S3-based infrastructure for RPM / Deb packages and GitLab CI pipelines that deploy packages to it. We don't plan to add 2.5+ repositories on packagecloud.io, so instead of the usual change of the target bucket from 2_N to 2_(N+1), the deploy stage is removed. Since all distro-specific jobs are duplicated in GitLab CI pipelines and those Travis-CI jobs are needed just for deployment, it is worth removing them too. Follows up #3380. Part of #4947.
Removed the obvious part in the RPM spec for Travis-CI, since it is no longer in use.

---- Comments from @Totktonada ----

This change is a kind of reversion of commit d48406d ('test: add more tests to packaging testing'), which closed #4599. Here I describe the story: why that change was made and why it is reverted now.

We run testing during an RPM package build: it may catch some distribution-specific problem. We had reduced the number of tests and run them in a single thread to keep the testing stable and not break package builds and deployment due to known fragile tests.

Our CI had to use Travis CI, but we were in transition to GitLab CI to use our own machines and not hit the Travis CI limit of five jobs running in parallel. We moved package builds to GitLab CI, but kept build+deploy jobs on Travis CI for a while: GitLab CI was new for us and we wanted to make this transition smooth for users of our APT / YUM repositories.

After enabling package building on GitLab CI, we wanted to enable more tests (to catch more problems) and parallel execution of tests to speed up testing (and reduce the amount of time a developer waits for results). We observed that if we enabled more tests and parallel execution on Travis CI, the testing results would become much less stable, so we would often have holes in deployed packages and red CI. So we decided to keep the old way of testing on Travis CI and perform all changes (more tests, more parallelism) only for GitLab CI. We guessed that we have enough machine resources and would be able to do some load balancing to overcome flaky fails on our own machines, but in fact we picked up another approach later (see below). That's the whole story behind #4599.

What has changed since those days? We moved deployment jobs to GitLab CI[^1] and have now completely disabled Travis CI (see #4410 and #4894). All jobs were moved either to GitLab CI or directly to GitHub Actions[^2]. We revisited our approach to improving the stability of testing. Attempts to do some load balancing together with attempts to keep execution time reasonable failed: we should increase parallelism for speed but decrease it for stability at the same time, and there is no optimal balance. So we decided to track flaky fails in the issue tracker and restart a test after a known fail (see details in [1]). This way we don't need to exclude tests and disable parallelism in order to get stable and fast testing[^3]. At least in theory. We're on the way to verify this guess, but hopefully we'll stick with some adequate defaults that will work everywhere[^4].

To sum up, there are several reasons to remove the old workaround, which was implemented in the scope of #4599: no Travis CI, and no foreseeable reasons to exclude tests and reduce parallelism depending on a CI provider.

Footnotes:

[^1]: This is a simplification. Travis CI deployment jobs were not moved as is. GitLab CI jobs push packages to the new repositories backend (#3380). Travis CI jobs were disabled later (as part of #4947), after proof that the new infrastructure works fine. However, that is another story.

[^2]: Now we're going to use GitHub Actions for all jobs, mainly because GitLab CI is poorly integrated with GitHub pull requests (when the source branch is in a forked repository).

[^3]: Some work in this direction is still to be done. First, the 'replication' test suite is still excluded from the testing under RPM package build. It seems we should just enable it back; this is tracked by #4798. Second, there is the issue [2] to get rid of ancient traces of the old attempts to keep the testing stable (from the test-run side). It'll give us more parallelism in testing.

[^4]: Of course, we investigate flaky fails and fix the code and testing problems they reveal. However, this appears to be a long-running activity.

References:

[1]: tarantool/test-run#217
[2]: https://github.com/tarantool/test-run/issues/251
In short: deployment jobs often fail now, and we'll recreate them on GitHub Actions in the future.

In detail:
* The old Travis CI infrastructure (.org) will be gone soon, and the new one (.com) has tight limits on the free plan.
* We already have testing on GitHub Actions (but without RPM / Deb uploads).
* Pulls from Docker Hub (part of the RPM / Deb build + deploy jobs) are often rate-limited when performed from Travis CI, likely due to reuse of IP addresses. GitHub Actions is not affected by this problem.
* We don't use packagecloud.io for hosting tarantool repositories anymore (see [1], [2], [3]). We'll deploy packages to our new infra in the future: this is tracked by #43.

[1]: tarantool/tarantool#3380
[2]: tarantool/tarantool#5494
[3]: tarantool/tarantool#4947

Removed the unused Jenkinsfile as well.
The deployment should support mirroring.