netvm memory fragmentation? #174
Comments
marmarek assigned rootkovska on Mar 8, 2015
marmarek added this to the Release 1 Beta 1 milestone on Mar 8, 2015
marmarek added the bug, C: xen, P: minor labels on Mar 8, 2015
marmarek (Member) commented on Mar 8, 2015
Comment by rafal on 4 Apr 2011 10:11 UTC
"when all other VMs are still shutdown, i.e. when Xen has plenty of memory"
This is not quite true: when all VMs are shut down, Xen has almost no free memory, because all of it is assigned to dom0. And apparently dom0 is handing back only fragmented pieces.
If you do the following while all VMs (except dom0) are down:
- touch /var/run/qubes/do-not-membalance
- xm mem-set 0 1500
- start the netvm and let it finish booting
- rm /var/run/qubes/do-not-membalance
will the iwlagn driver load OK?
Modified by joanna on 6 Apr 2011 12:37 UTC
marmarek modified the milestones: Release 1 Beta 2, Release 1 Beta 1 on Mar 8, 2015
marmarek (Member) commented on Mar 8, 2015
Comment by rafal on 6 Apr 2011 12:43 UTC
Issue reproduced; the suggested workaround helps (I actually tried with xm mem-set 0 2000, which is a bit more polite toward dom0).
Perhaps we could add a separate "restart netvm" button to the manager, active for the netvm, that performs the above actions (perhaps including shutting down all AppVMs).
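The workaround described above (pause memory balancing, shrink dom0, start the netvm, resume balancing) could be scripted roughly as follows. This is a minimal sketch, not anything shipped by Qubes: it assumes a Qubes R1-era setup with the Xen `xm` toolstack, a netvm literally named `netvm`, and a `qvm-start` command; all of those names are assumptions. It defaults to a dry run that only prints the commands, since the real ones require root in dom0.

```shell
#!/bin/sh
# Sketch of a "restart netvm" helper based on the workaround in this thread.
# Assumptions: xm toolstack, netvm named "netvm", qvm-start available.
# Defaults to a dry run (prints commands); set DRYRUN=0 in dom0 to execute.

run() {
    if [ "${DRYRUN:-1}" = "1" ]; then
        echo "$@"        # dry run: show the command instead of running it
    else
        "$@"             # real run: execute it
    fi
}

# 1. Pause qmemman so it does not immediately rebalance memory back to dom0.
run touch /var/run/qubes/do-not-membalance
# 2. Shrink dom0 so Xen regains a large (hopefully less fragmented) free pool.
run xm mem-set 0 1500
# 3. Start the netvm while that pool is available.
run qvm-start netvm
# 4. Re-enable automatic memory balancing.
run rm /var/run/qubes/do-not-membalance
```

A manager button like the one suggested here would essentially wrap these four steps, possibly preceded by shutting down all AppVMs.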
marmarek (Member) commented on Mar 8, 2015
Comment by joanna on 28 Jul 2011 09:24 UTC
It seems that with the Beta 2 dom0 kernel this is no longer a problem. Perhaps the new dom0 kernel hands out less fragmented memory?
In any case, I think it makes no sense to run xl mem-set 0 2000 right from the start (in init.d scripts), because we might waste memory that way.
I think I will just close this one.
marmarek commented on Mar 8, 2015
Reported by joanna on 31 Mar 2011 19:46 UTC
Ok, so the netvm now works really well (with our new suspend script), with one exception: when I shut down all the VMs, including the netvm, and then try to start it again, the iwlagn driver complains that it cannot allocate 'pci memory' (copy of dmesg from netvm):
Note that this happens when all other VMs are still shut down, i.e. when Xen has plenty of memory. And then we start the netvm as the first one -- so why does it get fragmented memory (in terms of MFNs)?
In practice this means the whole system must be rebooted if, at some point, the user has to shut down the netvm for any reason (though normally this doesn't happen).
Migrated-From: https://wiki.qubes-os.org/ticket/174