
Windows | Expected Throughput #643

Open

Aderinom opened this issue Dec 21, 2021 · 4 comments


Aderinom commented Dec 21, 2021

I am trying to evaluate whether this library can be used for a project.

To get the best throughput results I've tried different ways of integrating the lib: the non-threaded init, the normal init, etc.

With the normal threaded implementation and a few different settings, this is my current throughput.
I've checked that the core is running at max load, and looking at the CPU time, my own code doesn't take up too much of it either.

So, the question: is this kind of throughput expected on Windows, or are there some default settings which are known to improve performance?

Results:

OS: Windows 10
CPU: Intel i7-10700K @ 3.8 GHz
Using default sctp_init without the callback API (see the sketch below)
Sender is in a virtual machine; network is over a Hyper-V virtual switch
Using "clumsy" as a network fuzzer
Current commit: f7368e6
Application was built without optimization
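
For context, the non-callback setup with usrsctp looks roughly like this. This is only a minimal sketch of the pattern; the UDP encapsulation port (9899) and the one-to-one socket type are assumptions, not the exact values from my test code:

```c
#include <stdio.h>
#include <usrsctp.h>

/* Minimal sketch of the non-callback setup: initialize the stack with UDP
 * encapsulation and create a one-to-one SCTP socket that is later used with
 * blocking send/recv calls. Port 9899 is a placeholder value. */
static struct socket *setup_sctp(void)
{
    usrsctp_init(9899, NULL, NULL); /* NULL conn_output: usrsctp opens its own UDP socket */

    struct socket *sock = usrsctp_socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP,
                                         NULL,   /* no receive callback */
                                         NULL,   /* no send callback */
                                         0, NULL);
    if (sock == NULL) {
        perror("usrsctp_socket");
    }
    return sock;
}
```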

| Type | PcktSize | Reliability | mbyte/s | pkt/s |
| --- | --- | --- | --- | --- |
| OrderedLosless | 64000 | 100.00% | 51.52 | 805 |
| OrderedLosless | 32000 | 100.00% | 51.32 | 1604 |
| OrderedLosless | 16000 | 100.00% | 53.33 | 3333 |
| OrderedLosless | 500 | 100.00% | 26.34 | 52676 |
| UnorderedLossles | 64000 | 100.00% | 50.38 | 787 |
| UnorderedLossles | 32000 | 100.00% | 49.47 | 1546 |
| UnorderedLossles | 16000 | 100.00% | 47.67 | 2979 |
| UnorderedLossles | 8000 | 100.00% | 48.48 | 6061 |
| UnorderedRTXLoss | 16000 | 100.00% | 44.06 | 2754 |
| UnorderedTTLLoss | 16000 | 100.00% | 43.39 | 2712 |

When simulating an imperfect network I get the following results
(take these with a grain of salt; the fuzzing tool seems to have some problems):

| Type | PcktSize | Reliability | mbyte/s | pkt/s | Lag (IN/OUT) | Drop | Out of Order | Tamper |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OrderedLosless | 16000 | 100.00% | 42.14 | 2634 | | | | |
| OrderedLosless | 16000 | 100.00% | 2.96 | 185 | 10ms 60-100ms | | | |
| OrderedLosless | 16000 | 100.00% | 10.73 | 671 | | 5% | | |
| OrderedLosless | 16000 | 100.00% | 36.70 | 2294 | | | 50% | |
| OrderedLosless | 16000 | 100.00% | 5.98 | 374 | | | | 5% |
| UnorderedRTXLossy | 16000 | 97.00% | 8.05 | 503 | | 5% | | |
| UnorderedTTLLoss | 16000 | 55.00% | 3.30 | 206 | | 5% | | |
| UnorderedTTLLoss | 74313 | 0.98 | 10.50 | 656 | 5ms | | | |

tuexen commented Dec 21, 2021

Did you increase the send/recv socket buffer sizes? If not, does doing so improve things?
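
For reference, the per-socket buffers on a usrsctp socket can be enlarged roughly like this (a minimal sketch; the 2 MB values below are just an example, not a tuned recommendation):

```c
#include <stdio.h>
#include <usrsctp.h>

/* Sketch: enlarge the send/receive buffers on an already created usrsctp
 * socket. The 2 MB values are example numbers, not tuned recommendations. */
static void set_buffer_sizes(struct socket *sock)
{
    int sndbuf = 2 * 1024 * 1024;
    int rcvbuf = 2 * 1024 * 1024;

    if (usrsctp_setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, (socklen_t)sizeof(sndbuf)) < 0) {
        perror("SO_SNDBUF");
    }
    if (usrsctp_setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, (socklen_t)sizeof(rcvbuf)) < 0) {
        perror("SO_RCVBUF");
    }
}
```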

Aderinom commented Dec 21, 2021

I had them increased, yes.

However, I only built the application in Debug mode. Building it in Release with compiler optimization, the throughput is doubled... So that's something I should have thought about.

Also, the lag test is invalid; the tool I'm using creates much higher latency than it shows.
I assume the drop test is also incorrect.

However, I now get very weird results for the UnorderedTTLLossy packets (SCTP_PR_SCTP_TTL + 50 ms), even though I haven't really changed anything there. I also noticed they generate quite a lot of back-traffic: sending 80 Mbit while receiving 40-80 Mbit?
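
For context, this is roughly what sending with SCTP_PR_SCTP_TTL via usrsctp_sendv looks like. It's only a minimal sketch with placeholder values (stream 0, the 50 ms lifetime from the test), not my exact test code:

```c
#include <stdio.h>
#include <string.h>
#include <usrsctp.h>

/* Sketch: send one unordered message with the PR-SCTP "timed reliability"
 * policy, so the stack may abandon it after 50 ms. Placeholder values,
 * not the exact test code. */
static void send_unordered_ttl(struct socket *sock, const void *buf, size_t len)
{
    struct sctp_sendv_spa spa;

    memset(&spa, 0, sizeof(spa));
    spa.sendv_flags = SCTP_SEND_SNDINFO_VALID | SCTP_SEND_PRINFO_VALID;
    spa.sendv_sndinfo.snd_sid   = 0;               /* stream 0 */
    spa.sendv_sndinfo.snd_flags = SCTP_UNORDERED;  /* unordered delivery */
    spa.sendv_prinfo.pr_policy  = SCTP_PR_SCTP_TTL;
    spa.sendv_prinfo.pr_value   = 50;              /* lifetime in ms */

    if (usrsctp_sendv(sock, buf, len, NULL, 0,
                      &spa, (socklen_t)sizeof(spa), SCTP_SENDV_SPA, 0) < 0) {
        perror("usrsctp_sendv");
    }
}
```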

New values

| Type | PcktSize | Reliability | mbyte/s | pkt/s | Conditions |
| --- | --- | --- | --- | --- | --- |
| OrderedLosless | 64000 | 1 | 95.04 | 1485 | |
| OrderedLosless | 32000 | 1 | 87.52 | 2735 | |
| OrderedLosless | 16000 | 1 | 89.98 | 5624 | |
| OrderedLosless | 8000 | 1 | 85.76 | 10720 | |
| OrderedLosless | 4000 | 1 | 78.86 | 19716 | |
| OrderedLosless | 500 | 1 | 31.75 | 63492 | |
| UnorderedRTXLoss | 16000 | 0.99 | 80.05 | 5003 | |
| UnorderedTTLLoss | 16000 | 0.75 | 5.68 | 355 | |
| OrderedLosless | 16000 | 1 | 89.98 | 5624 | |
| OrderedLosless | 16000 | 1 | 3.15 | 197 | 10-40ms latency |
| OrderedLosless | 16000 | 1 | 14.80 | 925 | 2.5% drop chance in/out |
| OrderedLosless | 16000 | 1 | 33.71 | 2107 | 50% out of order |
| OrderedLosless | 16000 | 1 | 21.47 | 1342 | 2.5% tamper chance in/out |
| UnorderedRTXLoss | 16000 | 0.74 | 14.09 | 881 | 2.5% drop chance in/out |
| UnorderedTTLLoss | 16000 | 0.74 | 3.03 | 190 | 2.5% drop chance in/out |

I still think I'm doing something wrong...
Another question though: tsctp/tsctp_upcall should work like this:
server: `tsctp_upcall.exe`
client: `tsctp_upcall.exe <server-ip>`
Correct?


tuexen commented Dec 21, 2021

> I had them increased, yes.

How much?

> However, I only built the application in Debug mode. Building it in Release with compiler optimization, the throughput is doubled... So that's something I should have thought about.

Yes, adding debug support requires some cycles...

> Also, the lag test is invalid; the tool I'm using creates much higher latency than it shows. I assume the drop test is also incorrect.

> However, I now get very weird results for the UnorderedTTLLossy packets (SCTP_PR_SCTP_TTL + 50 ms), even though I haven't really changed anything there. I also noticed they generate quite a lot of back-traffic: sending 80 Mbit while receiving 40-80 Mbit?

You can capture the traffic using Wireshark and take a look, or provide the capture file...

> New values
>
> | Type | PcktSize | Reliability | mbyte/s | pkt/s | Conditions |
> | --- | --- | --- | --- | --- | --- |
> | OrderedLosless | 64000 | 1 | 95.04 | 1485 | |
> | OrderedLosless | 32000 | 1 | 87.52 | 2735 | |
> | OrderedLosless | 16000 | 1 | 89.98 | 5624 | |
> | OrderedLosless | 8000 | 1 | 85.76 | 10720 | |
> | OrderedLosless | 4000 | 1 | 78.86 | 19716 | |
> | OrderedLosless | 500 | 1 | 31.75 | 63492 | |
> | UnorderedRTXLoss | 16000 | 0.99 | 80.05 | 5003 | |
> | UnorderedTTLLoss | 16000 | 0.75 | 5.68 | 355 | |
> | OrderedLosless | 16000 | 1 | 89.98 | 5624 | |
> | OrderedLosless | 16000 | 1 | 3.15 | 197 | 10-40ms latency |
> | OrderedLosless | 16000 | 1 | 14.80 | 925 | 2.5% drop chance in/out |
> | OrderedLosless | 16000 | 1 | 33.71 | 2107 | 50% out of order |
> | OrderedLosless | 16000 | 1 | 21.47 | 1342 | 2.5% tamper chance in/out |
> | UnorderedRTXLoss | 16000 | 0.74 | 14.09 | 881 | 2.5% drop chance in/out |
> | UnorderedTTLLoss | 16000 | 0.74 | 3.03 | 190 | 2.5% drop chance in/out |

If you are not using loss and delay, what is limiting the transmission? The CPU load?

> I still think I'm doing something wrong... Another question though: tsctp/tsctp_upcall should work like this: server: `tsctp_upcall.exe`, client: `tsctp_upcall.exe <server-ip>`. Correct?

I think so... But I have no experience with Windows...


Aderinom commented Dec 21, 2021

> How much?

Way too large, honestly: 256 MB for both. I wanted to see if it makes a noticeable difference, since a lot of the changes I tried just gave the same result.

> You can capture the traffic using Wireshark and take a look, or provide the capture file...

Will take a look tomorrow. I was just surprised to see that and thought it might be something to be expected. I personally don't really need the TTL reliability mode anyway.

> If you are not using loss and delay, what is limiting the transmission? The CPU load?

You mean when using OrderedLossless? Yes, it's the CPU on the sending side. With an older version of the test code, the server could handle roughly 1.6x the load one sender could send. That might be different now, of course.

Question: could you give me a very rough hint of how much throughput you would expect?
