Don't limit vcpus for netvms to 1 #571

Closed
marmarek opened this Issue Mar 8, 2015 · 6 comments


marmarek commented Mar 8, 2015

Reported by joanna on 16 May 2012 13:19 UTC
I don't see any reason why all netvms got vcpus set to 1...

Similarly

Migrated-From: https://wiki.qubes-os.org/ticket/571


Member

marmarek commented Mar 8, 2015

Comment by marmarek on 18 May 2012 20:18 UTC
If you want >1 vcpus, just set it. It caused problems with some devices (I don't remember the details), so leaving one by default is a reasonable choice. Anyway, >1 vcpus in a netvm/proxyvm will not be used in any normal use case.
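For reference, "just set it" can be done from dom0 with `qvm-prefs` — a sketch, assuming a Qubes system; the VM name `sys-net` is illustrative, and the exact flag syntax differs between Qubes releases (older releases used `-s` to set a property):

```shell
# Show current properties of the netvm (name is illustrative).
qvm-prefs sys-net

# Assign two virtual CPUs to the netvm (older qvm-prefs used "-s" to set):
qvm-prefs -s sys-net vcpus 2

# Restart the VM so the new VCPU count takes effect.
qvm-shutdown --wait sys-net && qvm-start sys-net
```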


Member

marmarek commented Mar 8, 2015

Comment by joanna on 21 May 2012 08:22 UTC
When you use local Gbps networking, what is the CPU load in your netvm and firewallvm?


Member

marmarek commented Mar 8, 2015

Comment by marmarek on 21 May 2012 12:05 UTC
In the netvm about 80% (top process: netback/0), in the firewallvm about 60%. This was a single TCP stream of ~600 Mbps (iperf).
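The measurement above can be reproduced with something like the following sketch (iperf2 syntax; the peer address `10.137.0.10` and the 30-second duration are illustrative):

```shell
# In a server VM reachable over the local network:
iperf -s

# In the client VM, push a single TCP stream for 30 seconds:
iperf -c 10.137.0.10 -t 30

# Meanwhile, inside the netvm and firewallvm, watch per-process CPU
# usage; the netback kernel thread appears as "netback/0" in top:
top -d 1
```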


Member

marmarek commented Mar 8, 2015

Comment by joanna on 22 May 2012 09:17 UTC
80% is a lot... And how does it change when you enable 4 (all) VCPUs?

I really think we should not artificially disable the extra VCPUs "just in case". Rather, all VCPUs should be assigned to all VMs by default, and only in case of some trouble should one try to limit the number of VCPUs...


Member

marmarek commented Mar 8, 2015

Comment by marmarek on 25 May 2012 13:04 UTC
With more vcpus assigned (2 on my system), there is still only one netback process, and it still uses only one vCPU.
But OK, it is still possible to change the number of vcpus assigned (in case of problems), so the default can be "all". Maybe the problems (whatever they were) are now resolved in newer Xen/kernel versions.

@marmarek marmarek assigned marmarek and unassigned rootkovska Mar 8, 2015

@marmarek marmarek closed this Mar 8, 2015
