Transparent mechanism for migrating packages from current-testing to current #2573
Comments
marmarek added the C: builder and P: major labels on Jan 13, 2017
marmarek added this to the Release 4.0 milestone on Jan 13, 2017
marmarek (Member) commented on Jan 13, 2017
If the above proposal looks good, some more details to discuss:
Technically, the package for each target template is a separate package (qubes-core-agent v3.2.1 for Debian jessie is separate from qubes-core-agent v3.2.1 for Debian stretch). But I think it would be tedious to track them separately. This means that if a package has a bug affecting one template, it will also be delayed for the other templates (a bug affecting Debian jessie would prevent migrating the package for Debian stretch and Fedora 25 too). This is a trade-off, but I think a sensible one.
A somewhat related issue: what to do if the package build fails for some target templates - for example, for one Fedora version? Should the package be held until the build succeeds for all templates, or (if it works for the other templates) should the successfully built packages be allowed to migrate?
/cc (for the whole issue) @adrelanos @andrewdavidwong @woju
adrelanos (Member) commented on Jan 13, 2017
What about uploading packages to unstable first, so they are very briefly tested on some dev / release manager machine? With a very basic test: the machine still reboots and connectivity still works. Having all of this automated in CI is, I guess, another task for some point in the far future, let alone verification of the communication with the CI, distributed CI and whatnot. And after one day or so, or after a manual request, they flow to current-testing.
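For illustration, such a basic smoke test could be little more than a short script run from dom0 against a test VM; the VM name and the exact checks below are placeholders, not an existing Qubes tool:

```bash
#!/bin/bash
# Minimal smoke-test sketch (hypothetical): after installing the candidate
# packages in a test VM, check that it still boots and still has network.
set -e
VM=test-fedora-25   # placeholder name of a test/disposable VM

qvm-start "$VM"                                   # the VM still starts after the update
qvm-run --pass-io "$VM" 'true'                    # the agent still responds
qvm-run --pass-io "$VM" 'ping -c 3 example.com'   # basic connectivity check
qvm-shutdown --wait "$VM"
echo "basic smoke test passed for $VM"
```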
Not sure this is the right ticket to discuss this, however it would be great if Whonix packages also went through a release manager. My process is to upload them to the developers repository, briefly test them, then migrate them to the Whonix jessie-proposed-updates and testers repository. I wait an undefined number of days, depending on how grave and time-pressing the changes in the packages there are, and then migrate them to the jessie repository. To not lose track of which packages get migrated, my apt reprepro folder is under (local-only) git control. I'll 'git diff ./dists/jessie-proposed-updates/main/binary-arm64/Release' (or so) any changes made to the repository metadata before committing and uploading them.
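A rough sketch of that flow, assuming the suites mentioned above all live in one reprepro repository that is a git working copy (paths, suite names, and the package name are placeholders):

```bash
# Placeholder paths/suites; the reprepro tree is assumed to be under git.
cd /var/lib/whonix-repo      # hypothetical location of the reprepro base dir

# 1. Upload to the developers repository and test briefly (not shown here).
# 2. Copy the tested package to the proposed-updates / testers suite:
reprepro copy jessie-proposed-updates developers some-package

# 3. After the waiting period, migrate it to the stable suite:
reprepro copy jessie jessie-proposed-updates some-package

# Review what changed in the repository metadata before committing:
git diff -- dists/
git add dists/ && git commit -m "migrate some-package to jessie"
```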
I would suggest this for Qubes repository metadata also. The author of reprepro recommended against uploading the reprepro 'db' folder, otherwise I would be happy to upload that whole '.git' folder. Perhaps we'll gitignore the db folder (and have a separate private git repository there) and also upload that git folder for better transparency? (The 'db' folder under git control might help to diagnose eventual bugs in reprepro. Not a trivial piece of software.)
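A sketch of the layout being suggested, with assumed file names:

```bash
# In the published reprepro tree: track the metadata in git, but not the db/
# folder (per the reprepro author's advice). Whether to also ignore the pool/
# with the .deb files is discussed further down in this thread.
cat > .gitignore <<'EOF'
db/
EOF
git init
git add -A && git commit -m "initial snapshot of repository metadata"

# Optionally keep db/ under a separate, private git repository for debugging:
git -C db init
git -C db add -A && git -C db commit -m "reprepro db snapshot"
```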
gpg verification on the command line is indeed hard: https://github.com/Whonix/gpg-bash-lib
The Python gpg libraries that I know of also have major security issues listed in their trackers, and could otherwise use more scrutiny.
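As a small illustration of the kind of pitfall meant here (this is not gpg-bash-lib's API; file and keyring names are placeholders): a plain exit-code check is not enough, because gpg reports success for a good signature made by any key in the keyring, so scripted verification typically needs a dedicated keyring plus parsing of the machine-readable status output:

```bash
# Placeholder file names. Verify against a dedicated keyring and check the
# machine-readable status lines instead of relying only on the exit code.
gpg --no-default-keyring --keyring ./release-keys.gpg \
    --status-fd 1 --verify package.tar.gz.sig package.tar.gz \
  | grep -q '^\[GNUPG:\] VALIDSIG ' \
  && echo "signature made by a key in release-keys.gpg" \
  || echo "verification failed"
```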
andrewdavidwong (Member) commented on Jan 13, 2017
It sounds like a good proposal to me:
- There needs to be a more reliable way to migrate packages from current-testing to current in a timely fashion, and this fits the bill.
- I don't like the reliance on GitHub, but as you point out, it's not worse than the current setup, and we have to start somewhere.
> A nice thing here is the ability to delegate the release manager role to someone without handing over the package signing keys. While the release manager role still comes with great power, at least it will be more transparent and possible to audit.
This is a nice feature, especially since we're looking for a release manager (hint, hint to any trustworthy community members reading this).
> A somewhat related issue: what to do if the package build fails for some target templates - for example, for one Fedora version? Should the package be held until the build succeeds for all templates, or (if it works for the other templates) should the successfully built packages be allowed to migrate?
I am inclined to say that successfully built packages should be allowed to migrate, but if the delay would be reasonable, the other way is also fine.
marmarek (Member) commented on Jan 13, 2017
> What about uploading packages to unstable first? So they are very briefly tested on some dev / release manager machine?
This step is done before uploading them anywhere - or even before committing stuff. Mostly using some local repository on the LAN. See here: https://www.qubes-os.org/doc/development-workflow/#sending-packages-to-different-vm
But if a test requires someone else to run it (for example a fix for some hardware-specific bug), we indeed use the unstable repository for this. There is no formal workflow for putting packages there. For example, it is allowed to upload a package built from a non-standard branch that will never be merged.
> I would suggest this for Qubes repository metadata also. The author of reprepro recommended against uploading the reprepro 'db' folder, otherwise I would be happy to upload that whole '.git' folder. Perhaps we'll gitignore the db folder (and have a separate private git repository there) and also upload that git folder for better transparency? (The 'db' folder under git control might help to diagnose eventual bugs in reprepro. Not a trivial piece of software.)
Yes, keeping repository metadata in git sounds like a good idea. Excluding db should be trivial (as well as the deb files themselves, to keep .git at a manageable size). It will be somewhat harder for Fedora, as the repository metadata also contains binary files (like a generated sqlite database). And for a yum repository, by default only the packages are signed, not the metadata itself. Theoretically there is support for signed metadata, but apparently no one is using it (so I'd expect it to be buggy...). Actually, we currently take advantage of this to generate the metadata on the server, so we are not forced to keep all the packages locally (all rpm files are needed to regenerate yum metadata - there is no equivalent of the db directory).
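For reference, regenerating the yum metadata on the server amounts to running createrepo over the tree that holds all the rpm files; the path below is a placeholder, and the metadata-signing step is the rarely used option mentioned above:

```bash
# Run on the repository server, where all the rpm files live (placeholder path).
cd /srv/yum/r4.0/current-testing/dom0/fc25

# Regenerate the yum metadata from the rpm files present in the tree;
# --update reuses cached data for packages that did not change.
createrepo --update .

# Optional (and, as noted above, rarely used in practice): sign the metadata
# itself so that clients with repo_gpgcheck=1 can verify repomd.xml.
gpg --detach-sign --armor repodata/repomd.xml
```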
> gpg verification on the command line is indeed hard.
I think the first and most important step is to use gpgv2 instead of gpg2...
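A minimal sketch of the difference, with placeholder keyring and file names: gpgv2 verifies only against an explicitly supplied keyring and has no web-of-trust or key-import side effects, which makes it easier to use correctly in scripts:

```bash
# Placeholder names: verify a detached signature strictly against a known keyring.
gpgv2 --keyring ./qubes-release-keys.gpg \
      repodata/repomd.xml.asc repodata/repomd.xml \
  && echo "metadata signature OK"
```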
andrewdavidwong (Member) commented on Jan 14, 2017
> I think the first and most important step is to use gpgv2 instead of gpg2...
adrelanos (Member) commented on Jan 15, 2017
> Excluding db should be trivial (as well as the deb files themselves, to keep .git at a manageable size).
I think having the deb folder under version control is also useful, even if it's just private and not published. In case something gets added to that folder by accident, it's easy to rewind. And since the folder is private, it's easy to occasionally wipe the .git folder and re-run git init to save space.
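That reset step is simply (local-only history, so discarding it is acceptable here):

```bash
# Occasionally reclaim space by throwing away the private, local-only history:
rm -rf .git
git init
git add -A
git commit -m "fresh snapshot of the apt repository tree"
```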
marmarek commented on Jan 13, 2017
Current state
This is mostly about:
Currently - as mentioned in this document - it's done from the command line of the build machine(s), and more importantly, without any integrated method to check whether a package has known bugs. The only thing enforced by the script is preventing an upload to current if the package hasn't been in current-testing for at least 7 days. Beyond that, it's up to the release manager to check/remember whether a particular package has really been tested and reported to be working (or at least not reported to be broken). As with all manual steps, this is a weak spot... It's also easy to forget about some packages in current-testing.
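Purely for illustration (this is not the actual script), the 7-day rule boils down to a check along these lines, with a placeholder package path:

```bash
# Illustrative sketch, not the real migration script: refuse to move a package
# to current unless it has been in current-testing for at least 7 days.
MIN_DAYS=7
pkg_path="current-testing/vm/fc25/some-package.rpm"   # placeholder path

pkg_age_days=$(( ( $(date +%s) - $(stat -c %Y "$pkg_path") ) / 86400 ))
if [ "$pkg_age_days" -lt "$MIN_DAYS" ]; then
    echo "refusing to migrate: only $pkg_age_days day(s) in current-testing" >&2
    exit 1
fi
```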
Goal
It would be nice to have some public (web?) service for this, but let's keep in mind the assumption of untrusted infrastructure. So, for example, a compromise of such a service should not allow any control over the repository - especially not migrating (possibly broken) packages from current-testing to current. DoS probably could not be mitigated, but in such a case we can always fall back to the command line method.
Related to #1818
Proposal
A few details about step 5, to make it reasonably secure:
I think this workflow would make the process reasonably secure, while also being quite convenient. In particular, it will be trivial to see whether any package is still in current-testing, to easily collect related bug reports, etc. This is still prone to some attacks - for example, someone who controls GitHub could hide bug reports and trick the release manager into migrating a buggy package. But the same could be done at any other bug-reporting stage - like attacking the mail server to filter out emails, or hiding issues/notifications in QubesOS/qubes-issues. So I think it isn't worse than the current setup.
A nice thing here is the ability to delegate the release manager role to someone without handing over the package signing keys. While the release manager role still comes with great power, at least it will be more transparent and possible to audit.