
x/net/ipv4: missing a way to parse Linux packets on Darwin #43386

Open

denisvmedia opened this issue Dec 26, 2020 · 3 comments

Comments

denisvmedia commented Dec 26, 2020

What version of Go are you using (go version)?

$ go version
go version go1.15 darwin/amd64

Does this issue reproduce with the latest release?

Yes

What did you do?

I captured TCP/IPv4 packets on Linux and saved them to files to be used in unit tests. When I then ran the unit tests on macOS, they failed because of the hard-coded OS detection in ipv4.Header.Parse.

What did you expect to see?

I would expect to have a way to specify which OS a packet was captured on when parsing it. This feature would make the library's code cleaner and less platform-dependent. Currently, even the tests are not clean (they need a switch/case on the OS) and depend on the OS they are run on.

What did you see instead?

I'm not able to use ipv4.ParseHeader on a different platform with packet data that was saved earlier on Linux.
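For illustration only, here is a minimal sketch of the failing pattern described above. The fixture path is hypothetical, and the expected value of 60 is taken from the reporter's example later in this thread; the assertion holds on Linux but not on Darwin.

package ipv4fixture_test

import (
    "io/ioutil"
    "testing"

    "golang.org/x/net/ipv4"
)

// TestParseLinuxCapture parses an IPv4 header that was captured on Linux
// (i.e. stored in wire/network byte order) from a fixture file.
// Because ipv4.ParseHeader interprets the length and offset fields
// according to runtime.GOOS, the check passes on Linux but fails on Darwin.
func TestParseLinuxCapture(t *testing.T) {
    b, err := ioutil.ReadFile("testdata/linux_capture.bin") // hypothetical fixture
    if err != nil {
        t.Fatal(err)
    }
    h, err := ipv4.ParseHeader(b)
    if err != nil {
        t.Fatal(err)
    }
    if h.TotalLen != 60 { // value observed on Linux in the reporter's example
        t.Fatalf("TotalLen = %d, want 60", h.TotalLen)
    }
}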

gopherbot added this to the Unreleased milestone Dec 26, 2020
mengzhuo (Contributor) commented Dec 28, 2020

Could you show a code example? That would be very helpful.

shushen commented Jan 1, 2021

@denisvmedia The documentation of ipv4.Header.Parse() says the input b must be in the format used by a raw IP socket on the local system.

On Linux, raw socket data is in network byte order (big-endian), while on macOS it is already in host (NativeEndian) byte order, which is little-endian on Intel Macs. So I suspect you stored raw packet data captured on Linux (big-endian) and then parsed it on macOS (which expects little-endian).

You would likely need to deal with the byte order before feeding the data to ipv4.Header.Parse(). One option would be to always store the data in network byte order and convert it to match the local system before parsing, as sketched below.
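Purely as an illustration of that suggestion, here is a rough sketch of such a conversion for darwin/amd64. The darwin rules (TotalLen and FragOff read in host byte order, with the header length added back to TotalLen) are inferred from the runtime.GOOS switch in x/net/ipv4/header.go and from the numbers reported in this issue; other BSDs are not covered, and the sample header bytes are made up (TotalLen = 60).

package main

import (
    "encoding/binary"
    "fmt"
    "runtime"

    "golang.org/x/net/ipv4"
)

// toLocalRawFormat copies a wire-format (network byte order) IPv4 header and
// rewrites it into the layout ipv4.ParseHeader expects on the local system.
// Only linux and darwin on little-endian hosts are sketched here.
func toLocalRawFormat(wire []byte) []byte {
    b := append([]byte(nil), wire...) // don't mutate the stored capture
    if runtime.GOOS != "darwin" {
        return b // on linux the raw-socket layout is already the wire format
    }
    hdrlen := int(b[0]&0x0f) << 2
    totalLen := binary.BigEndian.Uint16(b[2:4])
    fragOff := binary.BigEndian.Uint16(b[6:8])
    // Darwin raw sockets deliver these two fields in host byte order
    // (little-endian on amd64), with the header length removed from ip_len.
    binary.LittleEndian.PutUint16(b[2:4], totalLen-uint16(hdrlen))
    binary.LittleEndian.PutUint16(b[6:8], fragOff)
    return b
}

func main() {
    // A made-up 20-byte header as it would appear in a Linux capture.
    wire := []byte{
        0x45, 0x00, 0x00, 0x3c, // version/IHL, TOS, TotalLen = 60
        0x1c, 0x46, 0x40, 0x00, // ID, flags + fragment offset
        0x40, 0x06, 0x00, 0x00, // TTL, protocol (TCP), checksum (unset)
        10, 0, 0, 1, // src
        10, 0, 0, 2, // dst
    }
    h, err := ipv4.ParseHeader(toLocalRawFormat(wire))
    if err != nil {
        panic(err)
    }
    fmt.Println(h.TotalLen) // prints 60 on both Linux and Darwin
}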

denisvmedia (Author) commented Jan 2, 2021

@mengzhuo here is a minimal example: https://play.golang.org/p/BIPim30eLCN

The output on Darwin and Linux differs: on Linux it is 60, which is correct, while on macOS it is 15380, which is incorrect. That's because of the hard-coded runtime.GOOS switch in /ipv4/header.go. Yes, as @shushen says, the ipv4.Header.Parse() description states that the input must be in the format used by a raw IP socket on the local system. But that is exactly the problem: before working with a Linux raw packet on Darwin, I have to convert it. I think that's the wrong approach. I believe there should instead be low-level functionality that accepts not just the raw packet data but also the OS it was captured on. Then the code would behave consistently in any environment.
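As a hedged sketch of what such low-level functionality could look like (parseWireHeader is a hypothetical helper, not part of x/net): parse the RFC 791 wire format directly with encoding/binary and fill in an ipv4.Header, so the result does not depend on runtime.GOOS.

package main

import (
    "encoding/binary"
    "errors"
    "fmt"
    "net"

    "golang.org/x/net/ipv4"
)

// parseWireHeader always interprets b as an RFC 791 wire-format IPv4 header
// (network byte order), regardless of the OS the program runs on.
func parseWireHeader(b []byte) (*ipv4.Header, error) {
    if len(b) < ipv4.HeaderLen {
        return nil, errors.New("header too short")
    }
    hdrlen := int(b[0]&0x0f) << 2
    if hdrlen < ipv4.HeaderLen || len(b) < hdrlen {
        return nil, errors.New("bad header length")
    }
    fragOff := int(binary.BigEndian.Uint16(b[6:8]))
    h := &ipv4.Header{
        Version:  int(b[0] >> 4),
        Len:      hdrlen,
        TOS:      int(b[1]),
        TotalLen: int(binary.BigEndian.Uint16(b[2:4])),
        ID:       int(binary.BigEndian.Uint16(b[4:6])),
        Flags:    ipv4.HeaderFlags(fragOff&0xe000) >> 13,
        FragOff:  fragOff & 0x1fff,
        TTL:      int(b[8]),
        Protocol: int(b[9]),
        Checksum: int(binary.BigEndian.Uint16(b[10:12])),
        Src:      net.IPv4(b[12], b[13], b[14], b[15]),
        Dst:      net.IPv4(b[16], b[17], b[18], b[19]),
    }
    if hdrlen > ipv4.HeaderLen {
        h.Options = append([]byte(nil), b[ipv4.HeaderLen:hdrlen]...)
    }
    return h, nil
}

func main() {
    // The same made-up Linux-captured header as in the sketch above (TotalLen = 60).
    wire := []byte{
        0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46, 0x40, 0x00,
        0x40, 0x06, 0x00, 0x00, 10, 0, 0, 1, 10, 0, 0, 2,
    }
    h, err := parseWireHeader(wire)
    if err != nil {
        panic(err)
    }
    fmt.Println(h.TotalLen) // 60 on every GOOS
}

An API of roughly this shape (or an explicit byte-order/OS parameter on Parse) is what the original request is asking for.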
