Disk space errors & issues with a Standalone VM #1191
marmarek (Member) commented Sep 15, 2015
As you can see above, you've run out of space in /tmp. You can easily enlarge it with `mount /tmp -o remount,size=512M`.
This isn't the first problem caused by a too-small /tmp...
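For anyone following along, a minimal sketch of that fix (run inside the affected VM; the remount is not persistent and lasts only until the VM shuts down):

```bash
# Grow the tmpfs currently mounted on /tmp to 512 MB (existing files are preserved):
sudo mount /tmp -o remount,size=512M
# Confirm the new size and current usage:
df -h /tmp
```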
bnvk commented Sep 15, 2015
@marmarek right-o, that did the job and resolved the issue, thanks much!
- We should probably plan to expose this in the AppVM Settings dialogue
- Perhaps take this into account in the "presets" for different task-specific VMs
- Look into the Qubes VM Manager crash and better protect against it
marmarek (Member) commented Sep 15, 2015
On Tue, Sep 15, 2015 at 05:36:17AM -0700, Brennan Novak wrote:
> @marmarek right-o, that did the job and resolved the issue, thanks much 😄
> - We should probably plan to expose this in the AppVM Settings dialogue
> - Perhaps take this into account in the "presets" for different task-specific VMs
> - Look into the Qubes VM Manager crash and better protect against it

I think we should simply make it bigger by default. No user would know what the proper /tmp size should be...
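One way to raise that default, sketched here under the assumption that the template uses systemd's stock tmp.mount unit (as Fedora does) — a drop-in override with a bigger size cap; the 2G figure is purely illustrative:

```bash
# In the TemplateVM: create a drop-in that overrides the tmpfs mount options
sudo mkdir -p /etc/systemd/system/tmp.mount.d
sudo tee /etc/systemd/system/tmp.mount.d/size.conf <<'EOF'
[Mount]
Options=mode=1777,strictatime,size=2G
EOF
sudo systemctl daemon-reload
```

Since /etc lives on the template's root image, AppVMs based on it would pick the override up on their next restart.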
qubesuser commented Sep 15, 2015
Is it really a good idea to mount a tmpfs on /tmp at all?
Ubuntu 15.04 HVM (which is what most software is tested on), for instance, doesn't seem to mount anything on /tmp by default.
Qubes could put /tmp on volatile.img and make it huge (since it's sparse).
marmarek (Member) commented Sep 15, 2015
On Tue, Sep 15, 2015 at 06:02:46AM -0700, qubesuser wrote:
> Is it really a good idea to mount a tmpfs on /tmp at all?

Ask Fedora people ;)
https://fedoraproject.org/wiki/Features/tmp-on-tmpfs

> Ubuntu 15.04 HVM (which is what most software is tested on), for instance, doesn't seem to mount anything on /tmp by default.
> Qubes could put /tmp on volatile.img and make it huge (since it's sparse).

Yes, it may be a good idea not to use tmpfs for /tmp, especially when the user doesn't have much RAM.
The side effect would be that /tmp from the TemplateVM may leak into child VMs. Not sure if that's an issue.
bnvk commented Sep 15, 2015
> I think we should simply make it bigger by default. No user would know what the proper /tmp size should be...

I'm all in favor of reducing that sort of complexity wherever possible :-) I was more referring to tucking it under an advanced tab or something, to make it intuitively configurable for the subset of advanced/developer users.
Additionally, I'm noticing that the command you suggested above does not persist across stopping & restarting my VM (not sure if that's expected behaviour). Even after increasing the tmpfs limit to 1024M or 2048M, I can't seem to complete my build jobs (which is probably due to poor design of these node scripts). Is there an upper limit, or a ratio of tmpfs size to total RAM, etc.?
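For a StandaloneVM, where the root filesystem persists, one way to make the larger size survive reboots is an /etc/fstab entry; a sketch, assuming a systemd distribution (where fstab entries take precedence over the stock tmp.mount unit), with the 2G figure illustrative:

```bash
# Append a tmpfs line for /tmp; applied on the next boot:
echo 'tmpfs /tmp tmpfs mode=1777,strictatime,size=2G 0 0' | sudo tee -a /etc/fstab
```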
qubesuser commented Sep 15, 2015
Yeah, it looks like not everyone switched to /var/tmp for large files...

> The side effect would be that /tmp from the TemplateVM may leak into child VMs. Not sure if that's an issue.

That seems to be the case only if you put it on the root filesystem, not if you put it on a separate volatile disk, such as the one currently used for swap and the volatile root snapshot.
In other words: change the script that creates volatile.img to create an additional partition for /tmp (at least 16GB, since that's the amount of RAM on common notebooks; maybe 32 or 64GB?), put a filesystem there (ext4 with journaling disabled, if the metadata is not too large; otherwise a filesystem whose metadata size is constant regardless of partition size), and change /etc/fstab to mount it on /tmp, as sketched below.
Alternatively, it's possible to enlarge swap so that a large tmpfs can be swapped out, but that's probably worse, since it allows unlimited thrashing, and a normal filesystem should be more efficient than a swapped tmpfs.
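A rough sketch of what that proposal might look like from inside the VM; the device name and partition number are assumptions for illustration (Qubes attaches volatile.img as a xvd* block device), not the actual layout:

```bash
# Hypothetical: assume the volatile disk gained a third partition for /tmp.
# ext4 without a journal keeps metadata overhead and write amplification low:
sudo mkfs.ext4 -O ^has_journal -L vmtmp /dev/xvdc3
# Mount it on /tmp at boot instead of the tmpfs:
echo '/dev/xvdc3 /tmp ext4 defaults,nosuid,nodev 0 0' | sudo tee -a /etc/fstab
```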
marmarek (Member) commented Sep 15, 2015
On Tue, Sep 15, 2015 at 07:20:21AM -0700, qubesuser wrote:
> Yeah, it looks like not everyone switched to /var/tmp for large files...
>> The side effect would be that /tmp from the TemplateVM may leak into child VMs. Not sure if that's an issue.
> That seems to be the case only if you put it on the root filesystem, not if you put it on a separate volatile disk, such as the one currently used for swap and the volatile root snapshot.
> In other words: change the script that creates volatile.img to create an additional partition for /tmp (at least 16GB, maybe 1/2 of the total space on the dom0 rootfs?), put a filesystem there (ext4 with journaling disabled, if the metadata is not too large; otherwise a filesystem whose metadata size is constant regardless of partition size), and change /etc/fstab to mount it on /tmp.

No, changing anything in dom0 scripts isn't justified by a special case in one of the templates. Adding an additional disk (or partition), in particular, just adds complexity.
Anyway, the Debian template has /tmp on the same filesystem as / and nothing is wrong with that. So we can do the same with Fedora.
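A quick way to check which arrangement a given template uses (findmnt prints nothing and exits non-zero when /tmp is not a separate mount):

```bash
findmnt /tmp || echo "/tmp is on the root filesystem, not a separate mount"
```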
qubesuser commented Sep 15, 2015
That would work, but it seems worse: it's not volatile for templates, and for non-templates it would be slower, since writes go through a dm-snapshot rather than directly to disk.
The default root filesystem size would also probably need to be increased from the current 10GB.
dom0 complexity could be avoided by doing the partitioning and mkfs operations for volatile.img in the template itself.
marmarek (Member) commented Sep 15, 2015
On Tue, Sep 15, 2015 at 10:37:18AM -0700, qubesuser wrote:
> That would work, but it seems worse: it's not volatile for templates, and for non-templates it would be slower, since writes go through a dm-snapshot rather than directly to disk.

I don't think it would be a significant difference. Again, this currently works pretty well for Debian templates.

> The default root filesystem size would also probably need to be increased from the current 10GB.

What for? Do you often have >2GB temp files? If so, you can also configure the application to use /home/user/tmp or something like that (see the sketch below). I don't think anyone expects /tmp to hold more than 2GB (especially on Fedora, where /tmp is normally on tmpfs, which is capped at half of RAM by default).
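Redirecting an application's temp files that way usually takes only an environment variable; a sketch for the node build case from this thread (most tools, node included, honor TMPDIR):

```bash
# Point temp-file creation at roomy, persistent storage for this shell session:
mkdir -p /home/user/tmp
export TMPDIR=/home/user/tmp
```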
> dom0 complexity could be avoided by doing the partitioning and mkfs operations for volatile.img in the template itself.

This actually could be a good idea. IMO the right place for this would be the initramfs, where the dmroot device is constructed:
https://github.com/QubesOS/qubes-linux-utils/blob/master/dracut/simple/init.sh
It wouldn't be possible to repartition volatile.img later, though, as the first partition would already be in use.
In summary: for the current release (3.0) we would simply not mount tmpfs on /tmp. For future releases we could consider repartitioning volatile.img (and generally moving volatile.img preparation out of dom0). Do you want to contribute some code here?
marmarek (Member) commented Sep 15, 2015
On Tue, Sep 15, 2015 at 06:44:44AM -0700, Brennan Novak wrote:
> Additionally, I'm noticing that the command you suggested above does not persist across stopping & restarting my VM (not sure if that's expected behaviour). Even after increasing the tmpfs limit to 1024M or 2048M, I can't seem to complete my build jobs (which is probably due to poor design of these node scripts). Is there an upper limit, or a ratio of tmpfs size to total RAM, etc.?

Yes, a tmpfs is stored in RAM, so the upper limit is your RAM size.
Actually, you can try the solution discussed here: do not mount tmpfs on /tmp at all. Take a look here (especially the release-notes part):
https://fedoraproject.org/wiki/Features/tmp-on-tmpfs
Start your TemplateVM and issue:
`sudo systemctl mask tmp.mount`
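Spelled out as a full sequence (the df call is just a sanity check; after masking, /tmp becomes a plain directory on the root filesystem):

```bash
# In the TemplateVM:
sudo systemctl mask tmp.mount
sudo poweroff
# Then, in a freshly restarted AppVM based on that template:
df -h /tmp   # should now report the root filesystem, not tmpfs
```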
bnvk referenced this issue on Nov 19, 2015: Improve usability of manipulating various VMs disk sizes #1441 (open)
/tmp was increased to 1GB here:
bnvk commented Sep 15, 2015
I am encountering a strange situation doing some software development work (running a node.js build script to generate packages), whereby the build task fails with a node.js error:
[ { [Error: ENOSPC, write] errno: 54, code: 'ENOSPC' } ] undefined
which is a no-disk-space error. Each time after this build script errors out, a normal tab-to-complete bash command thereafter also errors out with:
At first I thought this was just that my StandaloneVM needed more allotted disk space. I increased it to 24 GB, but the error persists. Here is the printout from looking at disk space usage:
Another strange issue I notice co-occurring with this "disk space issue": upon stopping & restarting the StandaloneVM, the Qubes VM Manager crashes. This StandaloneVM is based on a Fedora 21 template.
Any thoughts or additional info I can provide @marmarek ?
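For readers hitting the same symptoms, a few generic diagnostics (not from the original report) that help distinguish a full /tmp from a full root filesystem or inode exhaustion:

```bash
df -h                                      # free space per filesystem; note the /tmp line
df -i                                      # ENOSPC can also mean you ran out of inodes
sudo du -sh /tmp/* 2>/dev/null | sort -h   # what is actually filling /tmp
```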