
Another stability issue #31

Closed
pavel-odintsov opened this issue Aug 6, 2015 · 13 comments

Comments

@pavel-odintsov

Hello, folks!

I hit another bug on Ubuntu 14.04 with the 3.16 kernel and PF_RING 6.0.3.

I've shared the whole messages log here: https://www.dropbox.com/s/la1kb2mcdpdo9v9/test_lab_kern_fault_customer.log?dl=0

@pavel-odintsov
Author

It's an ESXi VM with a passthrough ixgbe NIC.

@lmangani
Contributor

Could you summarize the issue, if still present?

@pavel-odintsov
Author

Hello!

We got a kernel panic with the PF_RING kernel module. What details do you need?


@cardigliano
Member

Are you able to reproduce the same with the latest pf_ring?

Alfredo


@pavel-odintsov
Author

Hello!

I don't have access to this machine right now :(

@pavel-odintsov
Author

Hello!

I got access :) I've upgraded PF_RING and will run a stress flood overnight.

@pavel-odintsov
Author

I've been testing for 2 days with a wire-speed flood and hit another issue, this time with the memory allocator: https://www.dropbox.com/s/r9avrhrleqa7chq/dmesg_pfring_memory_issue?dl=0

@cardigliano
Member

Hi Pavel,
it seems you ran out of memory; please provide your configuration (driver, pf_ring module, etc.).

Alfredo


@pavel-odintsov
Author

Hello!

top - 11:43:24 up 2 days, 14:24,  2 users,  load average: 1,09, 1,16, 1,21
Tasks: 148 total,   2 running, 146 sleeping,   0 stopped,   0 zombie
%Cpu(s):  5,2 us,  0,7 sy,  0,0 ni, 64,1 id,  0,0 wa,  0,0 hi, 30,1 si,  0,0 st
KiB Mem:  16042832 total,  1041204 used, 15001628 free,   213944 buffers
KiB Swap: 22734844 total,   106220 used, 22628624 free.   516164 cached Mem
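The interesting figure in that top output is the softirq share: with ~30% si and only ~5% us, most of the CPU time is spent on in-kernel packet processing rather than in the application itself. A quick sketch that parses the %Cpu(s) line above (note this locale prints a comma as the decimal separator):

```python
import re

# The %Cpu(s) line pasted from the top output above (comma = decimal point).
cpu_line = "5,2 us,  0,7 sy,  0,0 ni, 64,1 id,  0,0 wa,  0,0 hi, 30,1 si,  0,0 st"

# Each field is "<value> <label>"; convert the locale decimal comma to a dot.
fields = {label: float(value.replace(",", "."))
          for value, label in re.findall(r"(\d+,\d+)\s+(\w+)", cpu_line)}

print(fields["si"])  # softirq time, 30.1% -> in-kernel packet processing
print(fields["us"])  # user time, only 5.2% for the application
```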

App which uses PF_RING:

ps aux|grep fast
root      5141 53.3  0.0 740628  3396 ?        Sl   вер01 1926:19 /opt/fastnetmon/fastnetmon --daemonize

Driver: standard ixgbe.

PF_RING from git.

OS - Ubuntu 14.04, Kernel - 3.16:

uname -a
Linux labuser-virtual-machine 3.16.0-45-generic #60~14.04.1-Ubuntu SMP Fri Jul 24 21:16:23 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Memory usage:

cat /proc/meminfo 
MemTotal:       16042832 kB
MemFree:        15001408 kB
MemAvailable:   15589856 kB
Buffers:          213952 kB
Cached:           516172 kB
SwapCached:        10184 kB
Active:           575148 kB
Inactive:         222992 kB
Active(anon):      44592 kB
Inactive(anon):    27736 kB
Active(file):     530556 kB
Inactive(file):   195256 kB
Unevictable:          32 kB
Mlocked:              32 kB
SwapTotal:      22734844 kB
SwapFree:       22628624 kB
Dirty:                20 kB
Writeback:             0 kB
AnonPages:         64352 kB
Mapped:            69676 kB
Shmem:              4312 kB
Slab:              90856 kB
SReclaimable:      63160 kB
SUnreclaim:        27696 kB
KernelStack:        4672 kB
PageTables:        10512 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    30756260 kB
Committed_AS:    1475840 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      177256 kB
VmallocChunk:   34359554476 kB
HardwareCorrupted:     0 kB
AnonHugePages:     12288 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       94144 kB
DirectMap2M:    16285696 kB
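Note that these numbers show almost no user-space memory pressure, so the allocation failure in the dmesg log is likely a kernel-side allocation issue rather than general exhaustion. A sketch checking the headline figures (values copied from the /proc/meminfo dump above):

```python
# Headline figures copied from the /proc/meminfo dump above, in kB.
mem_total = 16042832
mem_free = 15001408
slab = 90856

free_fraction = mem_free / mem_total
print(f"{free_fraction:.1%} of RAM is free")  # ~93.5% free -> no global shortage
print(f"slab usage: {slab / 1024:.0f} MiB")   # kernel allocator footprint is small
```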

@pavel-odintsov
Author

Driver - ixgbe:

modinfo ixgbe
filename:       /lib/modules/3.16.0-45-generic/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
version:        3.19.1-k
license:        GPL
description:    Intel(R) 10 Gigabit PCI Express Network Driver
author:         Intel Corporation, <linux.nics@intel.com>
srcversion:     29046EBE44112C033A8229A
alias:          pci:v00008086d00001560sv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Asv*sd*bc*sc*i*
alias:          pci:v00008086d00001557sv*sd*bc*sc*i*
alias:          pci:v00008086d00001558sv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Dsv*sd*bc*sc*i*
alias:          pci:v00008086d00001528sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F8sv*sd*bc*sc*i*
alias:          pci:v00008086d0000151Csv*sd*bc*sc*i*
alias:          pci:v00008086d00001529sv*sd*bc*sc*i*
alias:          pci:v00008086d0000152Asv*sd*bc*sc*i*
alias:          pci:v00008086d000010F9sv*sd*bc*sc*i*
alias:          pci:v00008086d00001514sv*sd*bc*sc*i*
alias:          pci:v00008086d00001507sv*sd*bc*sc*i*
alias:          pci:v00008086d000010FBsv*sd*bc*sc*i*
alias:          pci:v00008086d00001517sv*sd*bc*sc*i*
alias:          pci:v00008086d000010FCsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F7sv*sd*bc*sc*i*
alias:          pci:v00008086d00001508sv*sd*bc*sc*i*
alias:          pci:v00008086d000010DBsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F4sv*sd*bc*sc*i*
alias:          pci:v00008086d000010E1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010ECsv*sd*bc*sc*i*
alias:          pci:v00008086d000010DDsv*sd*bc*sc*i*
alias:          pci:v00008086d0000150Bsv*sd*bc*sc*i*
alias:          pci:v00008086d000010C8sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C7sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C6sv*sd*bc*sc*i*
alias:          pci:v00008086d000010B6sv*sd*bc*sc*i*
depends:        mdio,ptp,dca
intree:         Y
vermagic:       3.16.0-45-generic SMP mod_unload modversions 
signer:         Magrathea: Glacier signing key
sig_key:        C1:A3:1E:DB:9F:C4:C6:4E:2D:95:A7:FF:18:A6:73:D1:8C:AB:15:A6
sig_hashalgo:   sha512
parm:           max_vfs:Maximum number of virtual functions to allocate per physical function - default is zero and maximum value is 63. (Deprecated) (uint)
parm:           allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599-based adapters (uint)
parm:           debug:Debug level (0=none,...,16=all) (int)

@dcode

dcode commented Oct 21, 2015

Are there two different issues here? The first was a kernel panic, right? And the second was memory exhaustion?

@pavel-odintsov
Author

Yes, they are different.

@lucaderi
Member

@pavel-odintsov I can't see in your reports (please open individual issues next time) why PF_RING is the cause of your problems. You are running out of memory, probably because you are flooding your machine with traffic, and since you are not using ZC, you have a per-packet allocation.
Once you are sure that PF_RING is the cause of all this, please file a new bug, but do provide evidence that it is really PF_RING's fault; from your trace I think the problem is somewhere else.
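For context on the per-packet-allocation point, a back-of-the-envelope calculation (editor's sketch, not from the thread): a 10 GbE link saturated with minimum-size frames carries about 14.88 million packets per second, so a per-packet allocation path must service that many allocations each second, whereas ZC pre-allocates its packet buffers once.

```python
# Back-of-the-envelope packet rate for a saturated 10 GbE link.
LINK_BPS = 10_000_000_000  # 10 Gbit/s
MIN_FRAME = 64             # minimum Ethernet frame size, bytes
OVERHEAD = 20              # preamble (8 B) + inter-frame gap (12 B)

pps = LINK_BPS // ((MIN_FRAME + OVERHEAD) * 8)
print(f"{pps:,} packets/s")  # ~14.88 M allocations/s on a per-packet path
```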
