Fix the estimated size by adding the size of RPMs (yum cache) #874
anaconda moves the yum cache to the disk once it is partitioned, so the downloaded rpms also consume space until the installation is finished. Take this into account when estimating the size. Related: rhbz#1761337
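The approach in the commit message can be sketched roughly as follows. This is a minimal illustration, not lorax's actual `estimate_size()`; the function and parameter names here are hypothetical, and the 40% safety margin is taken from the over-estimation figure mentioned later in this thread:

```python
import os

def directory_size(path):
    """Total size in bytes of the regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fpath = os.path.join(root, name)
            if os.path.isfile(fpath):  # skip dangling symlinks
                total += os.path.getsize(fpath)
    return total

def estimate_size(installed_bytes, yum_cache_dir, overhead=1.4):
    """Estimate the rootfs size: the installed payload plus the
    downloaded RPMs that anaconda moves onto the new filesystem,
    with a safety margin (hypothetical 40% overhead)."""
    return int((installed_bytes + directory_size(yum_cache_dir)) * overhead)
```

The key point is that the cached RPMs count against the target disk until the installation finishes, so they are added to the payload size before the margin is applied.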
(force-pushed from df3de2e to 40e529d)
(Continuing the bz conversation here.) You said using estimate_size() resulted in doubling your rootfs? Here's what I get when I run it: it adds about 400M to the size.
Again, I agree. But on both of my test VMs, I confirm the metadata_size is about 2 GB. I'm not sure that I didn't break something, so I will try again. Currently, I have:
But only 400M in the corresponding yum cache:
EDIT: Maybe using cdn.redhat.com (which is slow for me) does not lead to this problem. Finally, we don't care about my test case; it will work anyway with your patch. EDIT2:
As I said previously, I don't understand why the metadata downloaded by lorax is so large; there may be another issue here. But I'm going to come back to your first recommendation, that is to say, including your commit from #875. Anyway, thanks for your time on this.
anaconda also writes the repo metadata to the disk, so take that into account when estimating the required size. We do this by using the size of lorax-composer's copy of the metadata which was used to depsolve the blueprint. Resolves: rhbz#1761337
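The second commit's idea, reusing the size of the metadata that lorax-composer already downloaded for depsolving, can be sketched as below. The helper and the paths are hypothetical (`/var/tmp/composer` is the cache location mentioned later in this thread, not a path taken from the patch itself):

```python
import os

def path_size(path):
    """Recursive apparent size of path, roughly what `du -sb` reports."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fpath = os.path.join(root, name)
            if os.path.isfile(fpath):
                total += os.path.getsize(fpath)
    return total

def required_extra_space(rpm_cache, metadata_cache):
    """Space anaconda will consume on the target disk beyond the
    installed payload: the downloaded RPMs plus the repo metadata
    it writes to the disk (hypothetical signature)."""
    return path_size(rpm_cache) + path_size(metadata_cache)
```

Using the already-downloaded copy avoids a second fetch, but as the discussion below shows, the measured metadata size can vary widely between repository setups.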
Thanks for taking another look at this. We have no control over how much metadata is downloaded, and depending on the repository being used it may be different from what I am using. When you say 'After a few time, the disk usage of this directory is 2 GB.', do you mean it is smaller the first time?
With the repos I have set up here I am not seeing a significant increase after starting the service.
No, I was just talking about the time needed to download 2GB.
Yep, already done this several times.
2 GB. I finally tested with cdn.redhat.com (with only the base repo) and I got the same size.
Done, no problem, that doesn't change the size of the cache.
I don't understand how you can get such a small cache on your system. In fact, a 400M cache is what I have in the standard /var/cache/yum, and that's why I don't understand why the cache in /var/tmp/composer is so large.
Ok, good, I would have been more puzzled :)
I don't use the cdn or a subscription; I am using a local mirror of internal releases. So the new question is: does the installation image really need that 2G of extra space? If you feel like experimenting, you could calculate the extra metadata size and still display it, but add a constant instead as an experiment, say 500M, to see if that's enough. If it still works, then I'm not sure this is a valid way to calculate the needed extra space.
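The experiment suggested here, keep measuring (and reporting) the metadata size but pad the estimate with a fixed constant instead, could look like this sketch (hypothetical names; the 500M constant is the value proposed above):

```python
FIXED_PAD = 500 * 1024**2  # 500 MiB, the constant suggested for the experiment

def padded_estimate(installed_bytes, measured_metadata_bytes, use_constant=True):
    """Still report the measured metadata size, but optionally pad the
    estimate with a fixed constant instead of the measured value."""
    print("measured metadata size: %d bytes" % measured_metadata_bytes)
    pad = FIXED_PAD if use_constant else measured_metadata_bytes
    return installed_bytes + pad
```

If installs still succeed with the constant pad while the measured value is 2G, that would suggest the measured metadata size over-counts what anaconda actually writes to the disk.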
I can confirm this is probably not the right way. In my first attempt, when I only added the size of all RPM packages, only a few extra MB were required (since the calculation was already over-estimated by 40%, which covered that too). That's why in my second attempt I only added the size of the metadata (4 files per repo, which is what I see when I refresh my yum repos), and it was enough. For my test case, I have:
I don't know why some files are downloaded in the installroot (/var/tmp/composer/yum/root) but not in the "standard" yum cache (in /var/cache/yum), especially: So I think we don't need to count them.
See PR #875
Related: rhbz#1761337