
support for Netmap in emulation mode #245

Closed
fklassen opened this issue May 6, 2016 · 16 comments


fklassen commented May 6, 2016

As per the mailing list:

Could someone advise how to use tcpreplay compiled with netmap, where netmap works in emulation mode - that is, with unsupported drivers?

I set netmap in emulation mode:
echo 2 > /sys/module/netmap/parameters/admode

tcpreplay tries to push data straight to the interface but aborts with these messages:
error: sk_under_panics
warning: ioctl error -1 35142:31 failed!
Fatal Error: failed to open device "intfname": nm_do_ioctl: Operation not supported
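
For context, a minimal standalone sketch of how a netmap port is usually opened with the netmap_user.h helpers (assumptions: netmap headers installed under /usr/include/net, and eth0 as a placeholder interface name). tcpreplay drives the netmap ioctls itself, so this only illustrates where a registration failure such as "Operation not supported" surfaces:

#include <stdio.h>

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

int main(int argc, char **argv)
{
    /* "netmap:eth0" asks netmap to register the interface; with
     * admode=2 the generic (emulated) adapter is used even for
     * drivers without native netmap support. */
    const char *port = argc > 1 ? argv[1] : "netmap:eth0";
    struct nm_desc *d = nm_open(port, NULL, 0, NULL);

    if (d == NULL) {
        /* A failed registration ioctl is reported here. */
        perror("nm_open");
        return 1;
    }
    printf("opened %s: %u tx rings, %u rx rings\n", port,
           (unsigned)d->req.nr_tx_rings, (unsigned)d->req.nr_rx_rings);
    nm_close(d);
    return 0;
}
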
@PavelBirulin

Fred, thanks!

You wrote:

That said, I added the following enhancement request. I suggest you add yourself as a follower of this issue. I'll have a look at what the effort is, and whether or not it makes sense to use it. With some recent performance enhancements available in Tcpreplay 4.1.1, you may find it runs faster than emulated Netmap.

Are there any options to activate these enhancements?
Are any libpcap settings useful for this? (As far as I understand, tcpreplay uses libpcap.)

@PavelBirulin

Hi Fred, I could not get tcpreplay to run faster than 15 MBps.

@fklassen

Pavel, this is a placeholder issue to remind me that there is a request to support this feature. As of now, this is not supported. You will need to get a Netmap-supported adapter and run in non-emulated mode.

This issue is not scheduled, so it will not make it into the next release at the end of the month.

@PavelBirulin

Fred,

Yes, thanks, I understand - this is just information for you about the speed of version 4.1.1.

@fklassen

Ahhh .. misunderstood. Thanks.

@PavelBirulin

Thanks,

BTW, I noticed the following when compiling tcpreplay with quick_tx under Raspbian:

warning: implicit declaration of function ‘rmb’ - which then leads to a linking error.

This happens, it seems, because of the different CPU type:

#if defined(__x86_64__) || defined(__i386__)
#define rmb() asm volatile("lfence" ::: "memory")
#define wmb() asm volatile("sfence" ::: "memory")
#endif /* __x86_64__ */

After commenting out the rmb and wmb macros in quick_tx.h I successfully built tcpreplay with quick_tx and ran it, but the speed was quite slow - just hundreds of pps (without quick_tx it gives up to 15 MBps).
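
A minimal sketch of how that block could be extended instead of commenting the barriers out, so quick_tx.h also compiles on ARM (assumptions: GCC, ARMv7 or newer for the dmb path; the __ARM_ARCH test and the __sync_synchronize() fallback are not part of quick_tx.h):

#if defined(__x86_64__) || defined(__i386__)
#define rmb() asm volatile("lfence" ::: "memory")   /* x86 read barrier */
#define wmb() asm volatile("sfence" ::: "memory")   /* x86 write barrier */
#elif defined(__arm__) && defined(__ARM_ARCH) && __ARM_ARCH >= 7
/* ARMv7+ data memory barrier, ordering both loads and stores */
#define rmb() asm volatile("dmb" ::: "memory")
#define wmb() asm volatile("dmb" ::: "memory")
#else
/* Portable fallback: GCC builtin that emits a full memory barrier */
#define rmb() __sync_synchronize()
#define wmb() __sync_synchronize()
#endif

On an ARMv6 board (the original Raspberry Pi) only the fallback branch applies; it is slower than a native barrier but keeps the ordering guarantees that quick_tx relies on, instead of dropping them by removing the macros.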

@fklassen

quick_tx is on a best-effort basis right now, until I can find time (or find someone with time) to fix it up. It is not, and probably never will be, supported on Raspbian.

@PavelBirulin

Thanks. I see a commented-out //#include "arm_mem_barrier.s" - does it make sense to try this include?
Where can I find it (if it exists)?

@fklassen

that would be in the cross compiler. It appears your cross compiler doesn't have it.

@PavelBirulin

No, I didn't find it on the disk at all. I found only this, with something about arm_memory_barrier: https://github.com/aquynh/capstone/blob/master/include/arm.h

@PavelBirulin

Fred, I have time actually :)

until I can find time (or find someone with time) to fix it up. It is not, and probably never will be, supported on Raspbian.

@fklassen

Great! If you make something nice, submit a pull request and I'll review.

As for the cross compiler, that is supplied by others. Look in your gcc/build directories.

@PavelBirulin

OK, I will try. What is the current state of quick_tx, in brief? (A list of known problems, etc.)

@PavelBirulin

It's interesting - when using --quick-tx and checking the actual sending speed independently of tcpreplay, I see that the first 10 packets are sent very quickly - each within a few MICROseconds. But then the speed slows down by a few orders of magnitude. It seems the first packets really do go directly to the NIC, but the NIC buffer is very small.

@fklassen

quick_tx is currently shelved, and may get yanked. I didn't do the development on it, and issues are not logged. If you want to resurrect it, feel free. Warning ... you need to have a fair knowledge of kernel programming.

fklassen added this to TODO in Release 4.2 on Feb 27, 2017
@fklassen

Investigation - too many topics are covered in this issue. I don't see anything here that will be supported anytime soon. Closing as "won't fix".

fklassen removed this from TODO in Release 4.2 on Feb 28, 2017