netvm memory fragmentation? #174

Closed
marmarek opened this Issue Mar 8, 2015 · 4 comments

Member

marmarek commented Mar 8, 2015

Reported by joanna on 31 Mar 2011 19:46 UTC
OK, so the netvm works really well now (with our new suspend script), with one exception: when I shut down all the VMs, including the netvm, and then try to start it again, the iwlagn driver complains that it cannot allocate 'pci memory' (copy of dmesg from the netvm):

[    4.806758] iwlagn: Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree:d
[    4.806761] iwlagn: Copyright(c) 2003-2010 Intel Corporation
[    5.007945] 0000:00:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:24:be:d6:4b:c8
[    5.007949] 0000:00:00.0: eth0: Intel(R) PRO/1000 Network Connection
[    5.007994] 0000:00:00.0: eth0: MAC: 9, PHY: 10, PBA No: ffffff-0ff
[    5.008761] iwlagn 0000:00:01.0: enabling device (0000 -> 0002)
[    5.008972]   alloc irq_desc for 16 on node 0
[    5.008973]   alloc kstat_irqs on node 0
[    5.009309] iwlagn 0000:00:01.0: setting latency timer to 64
[    5.009455] iwlagn 0000:00:01.0: Detected Intel Wireless WiFi Link 6000 Series 2x2 AGN REV=0x74
[    5.026404] iwlagn 0000:00:01.0: Tunable channels: 13 802.11bg, 24 802.11a channels
[    5.046329] iwlagn 0000:00:01.0: firmware: requesting iwlwifi-6000-4.ucode
[    5.055467] iwlagn 0000:00:01.0: loaded firmware version 9.221.4.1
[    5.055661] iwlagn 0000:00:01.0: failed to allocate pci memory

Note that this happens while all other VMs are still shut down, i.e. when Xen has plenty of memory, and we start the netvm as the first one -- so why does it get fragmented memory (in terms of MFNs)?

In practice this means the whole system must be rebooted if, at some point, the user has to shut down the netvm for any reason (normally this doesn't happen, though).

Migrated-From: https://wiki.qubes-os.org/ticket/174

Member

marmarek commented Mar 8, 2015

Comment by rafal on 4 Apr 2011 10:11 UTC

when all other VMs are still shutdown, i.e. when Xen has plenty of memory
This is not quite true -- when all VMs are shut down, Xen has almost zero free memory: all of it is assigned to dom0. And apparently, dom0 is handing back only fragmented pieces.

If you do the following, when all VMs (but dom0) are down:

  1. touch /var/run/qubes/do-not-membalance
  2. xm mem-set 0 1500
  3. start netvm, let it finish booting
  4. rm /var/run/qubes/do-not-membalance

will the iwlagn driver load OK?

RW
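For reference, the four steps above could be wrapped in a small helper script. This is only a sketch, not official Qubes tooling: the `xm mem-set` invocation and the do-not-membalance flag path come from the comment above, while the `step` wrapper and the `qvm-start netvm` call are assumptions about how the netvm would be started. By default it only echoes each command; set RUN=1 to execute them.

```shell
#!/bin/sh
# Sketch of the workaround above (hypothetical helper, not official tooling).
# Assumes all VMs except dom0 are already shut down.

FLAG=/var/run/qubes/do-not-membalance

step() {
    echo "+ $*"                       # show the command being (not) run
    if [ "${RUN:-0}" = 1 ]; then      # execute only when RUN=1
        "$@"
    fi
}

restart_netvm() {
    step touch "$FLAG"        # pause qmemman's automatic ballooning
    step xm mem-set 0 1500    # shrink dom0 so Xen regains contiguous memory
    step qvm-start netvm      # boot the netvm while free memory is unfragmented
    step rm -f "$FLAG"        # resume automatic memory balancing
}

restart_netvm
```

Running it without RUN=1 just prints the plan, which makes it easy to review the commands before touching dom0's memory.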

Member

marmarek commented Mar 8, 2015

Modified by joanna on 6 Apr 2011 12:37 UTC

Member

marmarek commented Mar 8, 2015

Comment by rafal on 6 Apr 2011 12:43 UTC
Issue reproduced; the suggested workaround helps (I actually tried with xm mem-set 0 2000, which is a bit more polite to dom0).
Perhaps we could add a separate "Restart NetVM" button to the manager, active for the netvm, that would perform the above actions (perhaps including shutting down all AppVMs).

Member

marmarek commented Mar 8, 2015

Comment by joanna on 28 Jul 2011 09:24 UTC
It seems that with the Beta 2 dom0 kernel this is no longer a problem. Perhaps the new dom0 kernel hands out less fragmented memory?

In any case, I think it makes no sense to do xl mem-set 0 2000 right from the start (in the init.d scripts), because we might waste memory this way.

I think I will just close this one.
