selftests: connect: "poll timed out" issue #192

Closed · matttbe opened this issue May 13, 2021 · 10 comments

matttbe commented May 13, 2021

After 10 attempts, when trying to reproduce #172, I got this error:

+ ./mptcp_connect.sh -l 0.99% -d 4 -r '94% 99%'
[    5.179050] ip (163) used greatest stack depth: 12208 bytes left
[    5.377081] ip (169) used greatest stack depth: 12160 bytes left
[    5.412652] ip (170) used greatest stack depth: 12096 bytes left
[    5.712754] IPv6: ADDRCONF(NETDEV_CHANGE): ns1eth2: link becomes ready
[    6.039750] IPv6: ADDRCONF(NETDEV_CHANGE): ns2eth3: link becomes ready
[    6.350804] IPv6: ADDRCONF(NETDEV_CHANGE): ns3eth4: link becomes ready
[    6.410865] IPv6: ADDRCONF(NETDEV_CHANGE): ns2eth1: link becomes ready
# INFO: set ns3-609d424c-QKZygJ dev ns3eth2: ethtool -K  gso off gro off
# INFO: set ns4-609d424c-QKZygJ dev ns4eth3: ethtool -K  gro off
# Created /tmp/tmp.aQDAyu3nPM (size 466972      /tmp/tmp.aQDAyu3nPM) containing data sent by client
# Created /tmp/tmp.0zrgihCS8q (size 7264284     /tmp/tmp.0zrgihCS8q) containing data sent by server
# New MPTCP socket can be blocked via sysctl            [ OK ]
# setsockopt(..., TCP_ULP, "mptcp", ...) blocked        [ OK ]
# INFO: validating network environment with pings
[   10.671291] netem: version 1.3
# INFO: Using loss of 0.99% delay 4 ms reorder 94% 99% with delay 1ms on ns3eth4
# ns1 MPTCP -> ns1 (10.0.1.1:10000      ) MPTCP (duration   111ms) [ OK ]
# ns1 MPTCP -> ns1 (10.0.1.1:10001      ) TCP   (duration    98ms) [ OK ]
# ns1 TCP   -> ns1 (10.0.1.1:10002      ) MPTCP (duration   105ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP (duration   150ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP   (duration   109ms) [ OK ]
# ns1 TCP   -> ns1 (dead:beef:1::1:10005) MPTCP (duration   106ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.1.2:10006      ) MPTCP (duration   136ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP (duration   124ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.2.1:10008      ) MPTCP (duration   104ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP (duration   125ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.2.2:10010      ) MPTCP (duration   150ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP (duration   146ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.3.2:10012      ) MPTCP (duration   154ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP (duration   162ms) [ OK ]
# ns1 MPTCP -> ns4 (10.0.3.1:10014      ) MPTCP write: Connection reset by peer
# copyfd_io_poll: poll timed out (events: POLLIN 1, POLLOUT 4)
# (duration 30147ms) [ FAIL ] client exit code 111, server 2
#
# netns ns4-609d424c-QKZygJ socket stat for 10014:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
# TcpPassiveOpens                 1                  0.0
# TcpEstabResets                  1                  0.0
# TcpInSegs                       27                 0.0
# TcpOutSegs                      51                 0.0
# TcpOutRsts                      12                 0.0
# TcpExtDelayedACKLost            2                  0.0
# TcpExtTCPHPHits                 2                  0.0
# TcpExtTCPPureAcks               6                  0.0
# TcpExtTCPDSACKOldSent           2                  0.0
# TcpExtTCPRcvCoalesce            1                  0.0
# TcpExtTCPOFOQueue               3                  0.0
# TcpExtTCPOrigDataSent           30                 0.0
# TcpExtTCPDelivered              10                 0.0
# MPTcpExtMPCapableSYNRX          1                  0.0
# MPTcpExtMPCapableACKRX          1                  0.0
#
# netns ns1-609d424c-QKZygJ socket stat for 10014:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
# TcpActiveOpens                  1                  0.0
# TcpEstabResets                  1                  0.0
# TcpInSegs                       32                 0.0
# TcpOutSegs                      43                 0.0
# TcpRetransSegs                  3                  0.0
# TcpExtTCPPureAcks               8                  0.0
# TcpExtTCPSackRecovery           1                  0.0
# TcpExtTCPFullUndo               1                  0.0
# TcpExtTCPFastRetrans            3                  0.0
# TcpExtTCPDSACKRecv              2                  0.0
# TcpExtTCPDSACKIgnoredNoUndo     2                  0.0
# TcpExtTCPSackMerged             1                  0.0
# TcpExtTCPSackShiftFallback      2                  0.0
# TcpExtTCPOrigDataSent           30                 0.0
# TcpExtTCPDelivered              19                 0.0
# TcpExtTCPDSACKRecvSegs          3                  0.0
# MPTcpExtMPCapableSYNTX          1                  0.0
# MPTcpExtMPCapableSYNACKRX       1                  0.0
#
# ns1 MPTCP -> ns4 (dead:beef:3::1:10015) MPTCP (duration   152ms) [ OK ]
# ns2 MPTCP -> ns1 (10.0.1.1:10016      ) MPTCP (duration   110ms) [ OK ]
# FAIL: Could not even run loopback test
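
For reference, the INFO line above corresponds to a netem qdisc roughly like the sketch below. This is only an approximation: the exact flags, and how the -d delay combines with the 1ms reorder delay, are handled inside mptcp_connect.sh, and $ns3 stands for the generated ns3-* namespace name.

# Rough equivalent of the announced impairment (sketch, assumptions above)
ip netns exec "$ns3" tc qdisc add dev ns3eth4 root netem \
        delay 1ms loss 0.99% reorder 94% 99%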
matttbe added the bug label May 13, 2021
matttbe added this to Needs triage in MPTCP Bugs via automation May 13, 2021
matttbe commented May 13, 2021

Another issue, slightly different:

# INFO: Using loss of 0.99% delay 4 ms reorder 94% 99% with delay 1ms on ns3eth4
# ns1 MPTCP -> ns1 (10.0.1.1:10000      ) MPTCP (duration   166ms) [ OK ]
# ns1 MPTCP -> ns1 (10.0.1.1:10001      ) TCP   (duration   113ms) [ OK ]
# ns1 TCP   -> ns1 (10.0.1.1:10002      ) MPTCP (duration   120ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP (duration   152ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP   (duration    95ms) [ OK ]
# ns1 TCP   -> ns1 (dead:beef:1::1:10005) MPTCP (duration   102ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.1.2:10006      ) MPTCP (duration    99ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP (duration   115ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.2.1:10008      ) MPTCP (duration   128ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP (duration   112ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.2.2:10010      ) MPTCP (duration   816ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP (duration   967ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.3.2:10012      ) MPTCP (duration   169ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP (duration   160ms) [ OK ]
# ns1 MPTCP -> ns4 (10.0.3.1:10014      ) MPTCP Unexpected revents: POLLERR/POLLNVAL(18)
# copyfd_io_poll: poll timed out (events: POLLIN 1, POLLOUT 4)
# (duration 30139ms) [ FAIL ] client exit code 5, server 2
#
# netns ns4-609d688b-Et4Qpc socket stat for 10014:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
# TcpPassiveOpens                 1                  0.0
# TcpEstabResets                  1                  0.0
# TcpInSegs                       33                 0.0
# TcpOutSegs                      53                 0.0
# TcpOutRsts                      16                 0.0
# TcpExtTCPHPHits                 2                  0.0
# TcpExtTCPPureAcks               10                 0.0
# TcpExtTCPRcvCoalesce            1                  0.0
# TcpExtTCPOFOQueue               1                  0.0
# TcpExtTCPOrigDataSent           30                 0.0
# TcpExtTCPDelivered              10                 0.0
# MPTcpExtMPCapableSYNRX          1                  0.0
# MPTcpExtMPCapableACKRX          1                  0.0
#
# netns ns1-609d688b-Et4Qpc socket stat for 10014:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
# TcpActiveOpens                  1                  0.0
# TcpEstabResets                  1                  0.0
# TcpInSegs                       53                 0.0
# TcpOutSegs                      64                 0.0
# TcpExtTCPPureAcks               6                  0.0
# TcpExtTCPSACKReorder            3                  0.0
# TcpExtTCPBacklogCoalesce        1                  0.0
# TcpExtTCPSackShiftFallback      1                  0.0
# TcpExtTCPOrigDataSent           43                 0.0
# TcpExtTCPDelivered              18                 0.0
# MPTcpExtMPCapableSYNTX          1                  0.0
# MPTcpExtMPCapableSYNACKRX       1                  0.0
#
# ns1 MPTCP -> ns4 (dead:beef:3::1:10015) MPTCP (duration   195ms) [ OK ]
# ns2 MPTCP -> ns1 (10.0.1.1:10016      ) MPTCP (duration   107ms) [ OK ]
# FAIL: Could not even run loopback test

matttbe commented May 14, 2021

Reproduced by @pabeni after 60 attempts, with no reboots in between → quicker to reproduce that way.

pabeni commented May 14, 2021

Another issue, slightly different:

# INFO: Using loss of 0.99% delay 4 ms reorder 94% 99% with delay 1ms on ns3eth4
# ns1 MPTCP -> ns1 (10.0.1.1:10000      ) MPTCP (duration   166ms) [ OK ]
# ns1 MPTCP -> ns1 (10.0.1.1:10001      ) TCP   (duration   113ms) [ OK ]
# ns1 TCP   -> ns1 (10.0.1.1:10002      ) MPTCP (duration   120ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP (duration   152ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP   (duration    95ms) [ OK ]
# ns1 TCP   -> ns1 (dead:beef:1::1:10005) MPTCP (duration   102ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.1.2:10006      ) MPTCP (duration    99ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP (duration   115ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.2.1:10008      ) MPTCP (duration   128ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP (duration   112ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.2.2:10010      ) MPTCP (duration   816ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP (duration   967ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.3.2:10012      ) MPTCP (duration   169ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP (duration   160ms) [ OK ]
# ns1 MPTCP -> ns4 (10.0.3.1:10014      ) MPTCP Unexpected revents: POLLERR/POLLNVAL(18)
# copyfd_io_poll: poll timed out (events: POLLIN 1, POLLOUT 4)

Possibly the same root cause: I think we can hit the above when receiving a RST from the peer (as in the previous log), depending on the exact timing.
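
As a side note on decoding the client error: assuming the selftest prints revents in hex (which matches the POLLERR claim in the message), the value 18 is POLLERR|POLLHUP, i.e. consistent with an incoming RST:

# Linux poll(2) event bits: POLLIN=0x001 POLLPRI=0x002 POLLOUT=0x004
#                           POLLERR=0x008 POLLHUP=0x010 POLLNVAL=0x020
printf '0x%x\n' $(( 0x008 | 0x010 ))   # POLLERR|POLLHUP -> 0x18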

pabeni commented May 14, 2021

From IRC:
[c] MPC ->
[s] MPC SYN ACK <-
[c] MPC ACK ->
[c] MPC ACK + data ->
(split over several segments)
(pkt drops)
(only segment #2 and later reach the server)
[s] TCP ACK (sack) <-
here the server generates a TCP 'sack' before the client key reaches the MPTCP stack
the server doesn't have enough info to generate the MPTCP-level ack
in mptcp-net-next the mp ext/client key reaches the mptcp part of the stack
only when the full TCP TSO is reconstructed
we could try to catch such a problematic scenario in mptcp_incoming_options()
but we would need to duplicate the full TCP sequence checks on OoO pkts
(plus some additional locking to update the msk from ssk context)
the server can also include some data with the sack (and the DSS)
but such a DSS will carry only the DSN and no ACK
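
One way to confirm this ordering on a failing run would be to capture the server-side traffic for the affected port, e.g. (a sketch: the namespace names are generated per run, so $ns4 here is a placeholder):

# Capture the handshake/SACK/RST ordering on the server side
# ($ns4 and the port are placeholders for the failing connection)
ip netns exec "$ns4" tcpdump -ni ns4eth3 -w /tmp/ns4.pcap 'tcp port 10014' &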

pabeni pushed a commit to pabeni/packetdrill that referenced this issue May 17, 2021
This explicitly covers the critical scenario triggering:

multipath-tcp/mptcp_net-next#192

and should fail until the kernel patch:

"mptcp: always parse mptcp options for MPC reqsk"

is merged.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
matttbe commented May 18, 2021

I hit an issue while running on top of the export branch with these 3 patches:

66f500c3e329 (HEAD -> tmp) mptcp: update selftest for fallback due to OoO
2eda92d4479d mptcp: do not reset MP_CAPABLE subflow on mapping errors
e03f1c973fc8 mptcp: always parse mptcp options for MPC reqsk

output:

++ ./mptcp_connect.sh -l 0.99% -d 4 -r '94% 99%'
[ 1528.458121] IPv6: ADDRCONF(NETDEV_CHANGE): ns1eth2: link becomes ready
[ 1528.812222] IPv6: ADDRCONF(NETDEV_CHANGE): ns2eth3: link becomes ready
[ 1529.129177] IPv6: ADDRCONF(NETDEV_CHANGE): ns4eth3: link becomes ready
[ 1529.134732] IPv6: ADDRCONF(NETDEV_CHANGE): ns3eth4: link becomes ready
# INFO: set ns3-60a3a9bb-TUMABI dev ns3eth2: ethtool -K tso off gso off gro off
# INFO: set ns4-60a3a9bb-TUMABI dev ns4eth3: ethtool -K  gso off
# Created /tmp/tmp.wz1dahVERW (size 2228252     /tmp/tmp.wz1dahVERW) containing data sent by client
# Created /tmp/tmp.EXHY40mecP (size 8065052     /tmp/tmp.EXHY40mecP) containing data sent by server
# New MPTCP socket can be blocked via sysctl            [ OK ]
# setsockopt(..., TCP_ULP, "mptcp", ...) blocked        [ OK ]
# INFO: validating network environment with pings
# INFO: Using loss of 0.99% delay 4 ms reorder 94% 99% with delay 1ms on ns3eth4
# ns1 MPTCP -> ns1 (10.0.1.1:10000      ) MPTCP (duration   175ms) [ OK ]
# ns1 MPTCP -> ns1 (10.0.1.1:10001      ) TCP   (duration   103ms) [ OK ]
# ns1 TCP   -> ns1 (10.0.1.1:10002      ) MPTCP (duration   101ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP (duration   166ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP   (duration   165ms) [ OK ]
# ns1 TCP   -> ns1 (dead:beef:1::1:10005) MPTCP (duration   107ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.1.2:10006      ) MPTCP (duration   113ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP (duration   195ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.2.1:10008      ) MPTCP (duration   116ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP (duration   154ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.2.2:10010      ) MPTCP (duration   201ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP (duration   207ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.3.2:10012      ) MPTCP (duration   369ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP (duration   233ms) [ OK ]
# ns1 MPTCP -> ns4 (10.0.3.1:10014      ) MPTCP (duration   203ms) [ OK ]
# ns1 MPTCP -> ns4 (dead:beef:3::1:10015) MPTCP (duration   194ms) [ OK ]
# ns2 MPTCP -> ns1 (10.0.1.1:10016      ) MPTCP (duration   139ms) [ OK ]
# ns2 MPTCP -> ns1 (dead:beef:1::1:10017) MPTCP (duration   123ms) [ OK ]
# ns2 MPTCP -> ns3 (10.0.2.2:10018      ) MPTCP (duration   181ms) [ OK ]
# ns2 MPTCP -> ns3 (dead:beef:2::2:10019) MPTCP (duration   342ms) [ OK ]
# ns2 MPTCP -> ns3 (10.0.3.2:10020      ) MPTCP (duration   291ms) [ OK ]
# ns2 MPTCP -> ns3 (dead:beef:3::2:10021) MPTCP (duration   229ms) [ OK ]
# ns2 MPTCP -> ns4 (10.0.3.1:10022      ) MPTCP (duration   349ms) [ OK ]
# ns2 MPTCP -> ns4 (dead:beef:3::1:10023) MPTCP (duration   523ms) [ OK ]
# ns3 MPTCP -> ns1 (10.0.1.1:10024      ) MPTCP (duration   158ms) [ OK ]
# ns3 MPTCP -> ns1 (dead:beef:1::1:10025) MPTCP (duration   160ms) [ OK ]
# ns3 MPTCP -> ns2 (10.0.1.2:10026      ) MPTCP Unexpected revents: POLLERR/POLLNVAL(18)
# copyfd_io_poll: poll timed out (events: POLLIN 1, POLLOUT 4)
# (duration 30128ms) [ FAIL ] client exit code 5, server 2
#
# netns ns2-60a3a9bb-TUMABI socket stat for 10026:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
# TcpPassiveOpens                 1                  0.0
# TcpEstabResets                  1                  0.0
# TcpInSegs                       18                 0.0
# TcpOutSegs                      16                 0.0
# TcpOutRsts                      1                  0.0
# TcpExtSyncookiesSent            1                  0.0
# TcpExtSyncookiesRecv            2                  0.0
# TcpExtSyncookiesFailed          1                  0.0
# TcpExtTCPPureAcks               1                  0.0
# TcpExtTCPReqQFullDoCookies      1                  0.0
# TcpExtTCPRcvCoalesce            1                  0.0
# TcpExtTCPOFOQueue               5                  0.0
# TcpExtTCPOrigDataSent           10                 0.0
# TcpExtTCPAckCompressed          1                  0.0
# MPTcpExtMPCapableSYNRX          1                  0.0
# MPTcpExtMPCapableACKRX          1                  0.0
#
# netns ns3-60a3a9bb-TUMABI socket stat for 10026:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
# TcpActiveOpens                  1                  0.0
# TcpEstabResets                  1                  0.0
# TcpInSegs                       11                 0.0
# TcpOutSegs                      18                 0.0
# TcpOutRsts                      9                  0.0
# TcpExtTCPOrigDataSent           7                  0.0
# TcpExtTCPDelivered              1                  0.0
# MPTcpExtMPCapableSYNTX          1                  0.0
# MPTcpExtMPCapableSYNACKRX       1                  0.0
#
# ns3 MPTCP -> ns2 (dead:beef:1::2:10027) MPTCP (duration   313ms) [ OK ]
# ns3 MPTCP -> ns2 (10.0.2.1:10028      ) MPTCP (duration   172ms) [ OK ]
# ns3 MPTCP -> ns2 (dead:beef:2::1:10029) MPTCP (duration   185ms) [ OK ]
# ns3 MPTCP -> ns4 (10.0.3.1:10030      ) MPTCP (duration   140ms) [ OK ]
# ns3 MPTCP -> ns4 (dead:beef:3::1:10031) MPTCP (duration   121ms) [ OK ]
# ns4 MPTCP -> ns1 (10.0.1.1:10032      ) MPTCP (duration   168ms) [ OK ]
# FAIL: Could not even run loopback test

To reproduce:

cd <kernel source code>

echo 'i=0; while run_selftest_one ./mptcp_connect.sh -l "0.99%" -d 4 -r "94% 99%"; do date -R; i=$((i+1)); echo $i; done'  > .virtme-exec-run

docker pull mptcp/mptcp-upstream-virtme-docker:latest

docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
  mptcp/mptcp-upstream-virtme-docker:latest \
  expect-normal
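
Note: run_selftest_one is a helper provided by the virtme docker image. Outside of it, a hypothetical stand-in could look like this (assumption: the real helper also stops the loop on a kernel splat):

# Hypothetical stand-in for the virtme-docker helper, not the real one
run_selftest_one() {
        "$@" || return 1                  # stop on selftest failure
        ! dmesg | grep -q 'Call Trace:'   # stop on a kernel splat too
}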

matttbe commented May 18, 2021

Now running with perf (compiled from the source tree) using:

perf probe -a subflow_error_report
cd tools/testing/selftests/net/mptcp
i=0
while true; do
        perf record -ag -e probe:subflow_error_report ./mptcp_connect.sh -l "0.99%" -d 4 -r "94% 99%" || break
        date -R; echo "=== $i ==="; i=$((i+1))
done

The first perf record command causes this warning:

++ perf probe -a subflow_error_report
Failed to find scope of probe point.
Added new event:
probe:subflow_error_report (on subflow_error_report)
You can now use it in all perf tools, such as:
perf record -e probe:subflow_error_report -aR sleep 1
++ i=0
++ true
++ perf record -ag -e probe:subflow_error_report ./mptcp_connect.sh -l 0.99% -d 4 -r '94% 99%'
[    8.400216] unchecked MSR access error: WRMSR to 0xc0010000 (tried to write 0x0000020000110076) at rIP: 0xffffffffb1c075ff amd_pmu_disable_event (arch/x86/include/asm/msr.h:103)
[    8.401044] Call Trace:
[    8.401044] x86_pmu_stop (arch/x86/events/core.c:1594)
[    8.401044] x86_pmu_del (arch/x86/events/core.c:1632)
[    8.401044] event_sched_out.isra.0 (kernel/events/core.c:2284)
[    8.401044] __perf_remove_from_context (kernel/events/core.c:2352)
[    8.401044] event_function (kernel/events/core.c:252)
[    8.401044] remote_function (kernel/events/core.c:91)
[    8.401044] generic_exec_single (kernel/smp.c:518)
[    8.401044] smp_call_function_single (kernel/smp.c:440)
[    8.401044] ? sw_perf_event_destroy (kernel/events/core.c:72)
[    8.401044] ? sw_perf_event_destroy (kernel/events/core.c:72)
[    8.401044] event_function_call (kernel/events/core.c:280)
[    8.401044] ? perf_group_detach (kernel/events/core.c:2343)
[    8.401044] ? cpu_clock_event_read (kernel/events/core.c:214)
[    8.401044] perf_event_release_kernel (kernel/events/core.c:5074 (discriminator 3))
[    8.401044] perf_release (kernel/events/core.c:5170)
[    8.401044] __fput (fs/file_table.c:280)
[    8.401044] task_work_run (kernel/task_work.c:164 (discriminator 1))
[    8.401044] exit_to_user_mode_prepare (include/linux/tracehook.h:189)
[    8.401044] syscall_exit_to_user_mode (arch/x86/include/asm/jump_label.h:35)
[    8.401044] ? __x64_sys_close (fs/open.c:1317)
[    8.401044] do_syscall_64 (arch/x86/entry/common.c:58)
[    8.401044] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:112)
[    8.401044] RIP: 0033:0x7f3cdd1ad3d7
[ 8.401044] Code: 64 89 02 48 c7 c0 ff ff ff ff eb bb 0f 1f 80 00 00 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 f3 fb ff ff
All code
========
   0:   64 89 02                mov    %eax,%fs:(%rdx)
   3:   48 c7 c0 ff ff ff ff    mov    $0xffffffffffffffff,%rax
   a:   eb bb                   jmp    0xffffffffffffffc7
   c:   0f 1f 80 00 00 00 00    nopl   0x0(%rax)
  13:   f3 0f 1e fa             endbr64
  17:   64 8b 04 25 18 00 00    mov    %fs:0x18,%eax
  1e:   00
  1f:   85 c0                   test   %eax,%eax
  21:   75 10                   jne    0x33
  23:   b8 03 00 00 00          mov    $0x3,%eax
  28:   0f 05                   syscall
  2a:*  48 3d 00 f0 ff ff       cmp    $0xfffffffffffff000,%rax         <-- trapping instruction
  30:   77 41                   ja     0x73
  32:   c3                      retq
  33:   48 83 ec 18             sub    $0x18,%rsp
  37:   89 7c 24 0c             mov    %edi,0xc(%rsp)
  3b:   e8 f3 fb ff ff          callq  0xfffffffffffffc33
Code starting with the faulting instruction
===========================================
   0:   48 3d 00 f0 ff ff       cmp    $0xfffffffffffff000,%rax
   6:   77 41                   ja     0x49
   8:   c3                      retq
   9:   48 83 ec 18             sub    $0x18,%rsp
   d:   89 7c 24 0c             mov    %edi,0xc(%rsp)
  11:   e8 f3 fb ff ff          callq  0xfffffffffffffc09
[    8.401044] RSP: 002b:00007fff82367678 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
[    8.401044] RAX: 0000000000000000 RBX: 00007fff82367690 RCX: 00007f3cdd1ad3d7
[    8.401044] RDX: 0000000000000000 RSI: 00000000ffffffff RDI: 0000000000000005
[    8.401044] RBP: 00007fff823676f0 R08: 0000000000000008 R09: 00007fff823676b0
[    8.401044] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000008
[    8.401044] R13: 000055a4d47d0120 R14: 000055a4d47d0130 R15: 000055a4d47d0f30
Couldn't synthesize bpf events.

But it seems to create the perf.data file.
No issues after 100 iterations so far.
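
If a failure is captured, a quick sanity check that perf.data actually contains probe samples is perf script, which prints one line per recorded sample:

# Empty output here means the probe never fired during the run
perf script -i perf.data | head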

matttbe commented May 21, 2021

When trying to reproduce #172, #192 and #195 with:

e47b2343a40c mptcp: update selftest for fallback due to OoO # v3
ef7fd26ee2dd mptcp: do not reset MP_CAPABLE subflow on mapping errors # v3
fc97d746905f mptcp: always parse mptcp options for MPC reqsk # v3
b22ab60dd7e4 Squash-to: "mptcp: fix sk_forward_memory corruption under memory pressure" # v1

I got:

++ ./mptcp_connect.sh -l 0.99% -d 4 -r '94% 99%'
[32183.066996] IPv6: ADDRCONF(NETDEV_CHANGE): ns1eth2: link becomes ready
[32183.415167] IPv6: ADDRCONF(NETDEV_CHANGE): ns2eth3: link becomes ready
[32183.790694] IPv6: ADDRCONF(NETDEV_CHANGE): ns4eth3: link becomes ready
[32183.796673] IPv6: ADDRCONF(NETDEV_CHANGE): ns3eth4: link becomes ready
# INFO: set ns3-60a73057-8SHPSw dev ns3eth2: ethtool -K  gso off
# INFO: set ns4-60a73057-8SHPSw dev ns4eth3: ethtool -K tso off gso off gro off
# Created /tmp/tmp.nJ5liOiWtv (size 2154524     /tmp/tmp.nJ5liOiWtv) containing data sent by client
# Created /tmp/tmp.HTzxyq1Hm4 (size 276508      /tmp/tmp.HTzxyq1Hm4) containing data sent by server
# New MPTCP socket can be blocked via sysctl            [ OK ]
# setsockopt(..., TCP_ULP, "mptcp", ...) blocked        [ OK ]
# INFO: validating network environment with pings
# INFO: Using loss of 0.99% delay 4 ms reorder 94% 99% with delay 1ms on ns3eth4
# ns1 MPTCP -> ns1 (10.0.1.1:10000      ) MPTCP (duration   256ms) [ OK ]
# ns1 MPTCP -> ns1 (10.0.1.1:10001      ) TCP   (duration   103ms) [ OK ]
# ns1 TCP   -> ns1 (10.0.1.1:10002      ) MPTCP (duration   142ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP (duration   144ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP   (duration   102ms) [ OK ]
# ns1 TCP   -> ns1 (dead:beef:1::1:10005) MPTCP (duration    99ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.1.2:10006      ) MPTCP (duration   158ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP (duration   146ms) [ OK ]
# ns1 MPTCP -> ns2 (10.0.2.1:10008      ) MPTCP (duration   361ms) [ OK ]
# ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP (duration   362ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.2.2:10010      ) MPTCP (duration   607ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP (duration   144ms) [ OK ]
# ns1 MPTCP -> ns3 (10.0.3.2:10012      ) MPTCP (duration   160ms) [ OK ]
# ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP (duration   431ms) [ OK ]
# ns1 MPTCP -> ns4 (10.0.3.1:10014      ) MPTCP (duration   140ms) [ OK ]
# ns1 MPTCP -> ns4 (dead:beef:3::1:10015) MPTCP (duration   473ms) [ OK ]
# ns2 MPTCP -> ns1 (10.0.1.1:10016      ) MPTCP (duration   153ms) [ OK ]
# ns2 MPTCP -> ns1 (dead:beef:1::1:10017) MPTCP (duration   141ms) [ OK ]
# ns2 MPTCP -> ns3 (10.0.2.2:10018      ) MPTCP (duration   149ms) [ OK ]
# ns2 MPTCP -> ns3 (dead:beef:2::2:10019) MPTCP (duration   144ms) [ OK ]
# ns2 MPTCP -> ns3 (10.0.3.2:10020      ) MPTCP (duration   150ms) [ OK ]
# ns2 MPTCP -> ns3 (dead:beef:3::2:10021) MPTCP (duration   212ms) [ OK ]
# ns2 MPTCP -> ns4 (10.0.3.1:10022      ) MPTCP (duration   148ms) [ OK ]
# ns2 MPTCP -> ns4 (dead:beef:3::1:10023) MPTCP (duration   164ms) [ OK ]
# ns3 MPTCP -> ns1 (10.0.1.1:10024      ) MPTCP (duration   145ms) [ OK ]
# ns3 MPTCP -> ns1 (dead:beef:1::1:10025) MPTCP (duration   182ms) [ OK ]
# ns3 MPTCP -> ns2 (10.0.1.2:10026      ) MPTCP (duration   160ms) [ OK ]
# ns3 MPTCP -> ns2 (dead:beef:1::2:10027) MPTCP (duration   182ms) [ OK ]
# ns3 MPTCP -> ns2 (10.0.2.1:10028      ) MPTCP (duration   198ms) [ OK ]
# ns3 MPTCP -> ns2 (dead:beef:2::1:10029) MPTCP (duration   233ms) [ OK ]
# ns3 MPTCP -> ns4 (10.0.3.1:10030      ) MPTCP (duration   171ms) [ OK ]
# ns3 MPTCP -> ns4 (dead:beef:3::1:10031) MPTCP (duration   147ms) [ OK ]
# ns4 MPTCP -> ns1 (10.0.1.1:10032      ) MPTCP (duration   138ms) [ OK ]
# ns4 MPTCP -> ns1 (dead:beef:1::1:10033) MPTCP (duration   130ms) [ OK ]
# ns4 MPTCP -> ns2 (10.0.1.2:10034      ) MPTCP (duration   191ms) [ OK ]
# ns4 MPTCP -> ns2 (dead:beef:1::2:10035) MPTCP (duration   229ms) [ OK ]
# ns4 MPTCP -> ns2 (10.0.2.1:10036      ) MPTCP (duration  1272ms) [ OK ] WARN: SYNRX: expect 11, got 12 (probably retransmissions)
# ns4 MPTCP -> ns2 (dead:beef:2::1:10037) MPTCP Unexpected revents: POLLERR/POLLNVAL(18)
# copyfd_io_poll: poll timed out (events: POLLIN 1, POLLOUT 4)
# (duration 30187ms) [ FAIL ] client exit code 5, server 2
#
# netns ns2-60a73057-8SHPSw socket stat for 10037:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
# TcpPassiveOpens                 1                  0.0
# TcpEstabResets                  1                  0.0
# TcpInSegs                       20                 0.0
# TcpOutSegs                      17                 0.0
# TcpOutRsts                      3                  0.0
# TcpExtSyncookiesSent            1                  0.0
# TcpExtSyncookiesRecv            2                  0.0
# TcpExtSyncookiesFailed          3                  0.0
# TcpExtTCPPureAcks               1                  0.0
# TcpExtTCPReqQFullDoCookies      1                  0.0
# TcpExtTCPRcvCoalesce            4                  0.0
# TcpExtTCPOFOQueue               6                  0.0
# TcpExtTCPOrigDataSent           10                 0.0
# TcpExtTCPAckCompressed          3                  0.0
# MPTcpExtMPCapableSYNRX          1                  0.0
# MPTcpExtMPCapableACKRX          1                  0.0
#
# netns ns4-60a73057-8SHPSw socket stat for 10037:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
# TcpActiveOpens                  1                  0.0
# TcpEstabResets                  1                  0.0
# TcpInSegs                       12                 0.0
# TcpOutSegs                      20                 0.0
# TcpOutRsts                      8                  0.0
# TcpExtTCPOrigDataSent           10                 0.0
# TcpExtTCPDelivered              1                  0.0
# MPTcpExtMPCapableSYNTX          1                  0.0
# MPTcpExtMPCapableSYNACKRX       1                  0.0
#
# ns4 MPTCP -> ns3 (10.0.2.2:10038      ) MPTCP (duration   157ms) [ OK ]
# ns4 MPTCP -> ns3 (dead:beef:2::2:10039) MPTCP (duration   161ms) [ OK ]
# ns4 MPTCP -> ns3 (10.0.3.2:10040      ) MPTCP (duration   151ms) [ OK ]
# ns4 MPTCP -> ns3 (dead:beef:3::2:10041) MPTCP (duration   166ms) [ OK ]
# INFO: with peek mode: saveWithPeek
# ns1 MPTCP -> ns1 (10.0.1.1:10042      ) MPTCP (duration   149ms) [ OK ]
# ns1 MPTCP -> ns1 (10.0.1.1:10043      ) TCP   (duration   176ms) [ OK ]
# ns1 TCP   -> ns1 (10.0.1.1:10044      ) MPTCP (duration    99ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10045) MPTCP (duration   142ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10046) TCP   (duration    96ms) [ OK ]
# ns1 TCP   -> ns1 (dead:beef:1::1:10047) MPTCP (duration   111ms) [ OK ]
# INFO: with peek mode: saveAfterPeek
# ns1 MPTCP -> ns1 (10.0.1.1:10048      ) MPTCP (duration   183ms) [ OK ]
# ns1 MPTCP -> ns1 (10.0.1.1:10049      ) TCP   (duration   145ms) [ OK ]
# ns1 TCP   -> ns1 (10.0.1.1:10050      ) MPTCP (duration    97ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10051) MPTCP (duration   145ms) [ OK ]
# ns1 MPTCP -> ns1 (dead:beef:1::1:10052) TCP   (duration   147ms) [ OK ]
# ns1 TCP   -> ns1 (dead:beef:1::1:10053) MPTCP (duration   123ms) [ OK ]
# Time: 104 seconds
++ rc=1

fengguang pushed a commit to 0day-ci/linux that referenced this issue May 22, 2021
When some mapping-related errors occur, we close the main
MPC subflow with a RST. We should instead fall back gracefully
to TCP, and do the reset only for MPJ subflows.

Fixes: d22f498 ("mptcp: process MP_CAPABLE data option")
Closes: multipath-tcp/mptcp_net-next#192
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
matttbe commented May 25, 2021

Using the same source code, I tried to get more details with perf:

++ perf probe -a tcp_send_active_reset
++ cd tools/testing/selftests/net/mptcp

(...)

=== 1569 ===
++ i=1570
++ true
++ perf record -ag -e probe:tcp_send_active_reset ./mptcp_connect.sh -l 0.99% -d 4 -r '94% 99%'
Couldn't synthesize bpf events.
[117382.359674] IPv6: ADDRCONF(NETDEV_CHANGE): ns1eth2: link becomes ready
[117382.758244] IPv6: ADDRCONF(NETDEV_CHANGE): ns2eth3: link becomes ready
[117383.017189] IPv6: ADDRCONF(NETDEV_CHANGE): ns2eth1: link becomes ready
[117383.195538] IPv6: ADDRCONF(NETDEV_CHANGE): ns4eth3: link becomes ready
[117383.201535] IPv6: ADDRCONF(NETDEV_CHANGE): ns3eth4: link becomes ready
INFO: set ns3-60a9970c-1K6Du5 dev ns3eth2: ethtool -K tso off gso off
Created /tmp/tmp.iuXQE2cxQT (size 6271004       /tmp/tmp.iuXQE2cxQT) containing data sent by client
Created /tmp/tmp.5fQKZGn8Oq (size 1394716       /tmp/tmp.5fQKZGn8Oq) containing data sent by server
New MPTCP socket can be blocked via sysctl              [ OK ]
setsockopt(..., TCP_ULP, "mptcp", ...) blocked  [ OK ]
INFO: validating network environment with pings
INFO: Using loss of 0.99% delay 4 ms reorder 94% 99% with delay 1ms on ns3eth4
ns1 MPTCP -> ns1 (10.0.1.1:10000      ) MPTCP   (duration   153ms) [ OK ]
ns1 MPTCP -> ns1 (10.0.1.1:10001      ) TCP     (duration   119ms) [ OK ]
ns1 TCP   -> ns1 (10.0.1.1:10002      ) MPTCP   (duration   107ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP   (duration   142ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP     (duration   168ms) [ OK ]
ns1 TCP   -> ns1 (dead:beef:1::1:10005) MPTCP   (duration   124ms) [ OK ]
ns1 MPTCP -> ns2 (10.0.1.2:10006      ) MPTCP   (duration   121ms) [ OK ]
ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP   (duration   102ms) [ OK ]
ns1 MPTCP -> ns2 (10.0.2.1:10008      ) MPTCP   (duration   142ms) [ OK ]
ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP   (duration   129ms) [ OK ]
ns1 MPTCP -> ns3 (10.0.2.2:10010      ) MPTCP   (duration   148ms) [ OK ]
ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP   (duration   173ms) [ OK ]
ns1 MPTCP -> ns3 (10.0.3.2:10012      ) MPTCP   (duration   162ms) [ OK ]
ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP   (duration   213ms) [ OK ]
ns1 MPTCP -> ns4 (10.0.3.1:10014      ) MPTCP   (duration   703ms) [ OK ]
ns1 MPTCP -> ns4 (dead:beef:3::1:10015) MPTCP   (duration   217ms) [ OK ]
ns2 MPTCP -> ns1 (10.0.1.1:10016      ) MPTCP   (duration   108ms) [ OK ]
ns2 MPTCP -> ns1 (dead:beef:1::1:10017) MPTCP   (duration   102ms) [ OK ]
ns2 MPTCP -> ns3 (10.0.2.2:10018      ) MPTCP   (duration   230ms) [ OK ]
ns2 MPTCP -> ns3 (dead:beef:2::2:10019) MPTCP   (duration   207ms) [ OK ]
ns2 MPTCP -> ns3 (10.0.3.2:10020      ) MPTCP   (duration   214ms) [ OK ]
ns2 MPTCP -> ns3 (dead:beef:3::2:10021) MPTCP   (duration   305ms) [ OK ]
ns2 MPTCP -> ns4 (10.0.3.1:10022      ) MPTCP   (duration   234ms) [ OK ]
ns2 MPTCP -> ns4 (dead:beef:3::1:10023) MPTCP   (duration   308ms) [ OK ]
ns3 MPTCP -> ns1 (10.0.1.1:10024      ) MPTCP   (duration   209ms) [ OK ]
ns3 MPTCP -> ns1 (dead:beef:1::1:10025) MPTCP   (duration   167ms) [ OK ]
ns3 MPTCP -> ns2 (10.0.1.2:10026      ) MPTCP   (duration   250ms) [ OK ]
ns3 MPTCP -> ns2 (dead:beef:1::2:10027) MPTCP   (duration   263ms) [ OK ]
ns3 MPTCP -> ns2 (10.0.2.1:10028      ) MPTCP   (duration   201ms) [ OK ]
ns3 MPTCP -> ns2 (dead:beef:2::1:10029) MPTCP   (duration   189ms) [ OK ]
ns3 MPTCP -> ns4 (10.0.3.1:10030      ) MPTCP   (duration   118ms) [ OK ]
ns3 MPTCP -> ns4 (dead:beef:3::1:10031) MPTCP   (duration   127ms) [ OK ]
ns4 MPTCP -> ns1 (10.0.1.1:10032      ) MPTCP   (duration   160ms) [ OK ]
ns4 MPTCP -> ns1 (dead:beef:1::1:10033) MPTCP   (duration   209ms) [ OK ]
ns4 MPTCP -> ns2 (10.0.1.2:10034      ) MPTCP   (duration   218ms) [ OK ]
ns4 MPTCP -> ns2 (dead:beef:1::2:10035) MPTCP   (duration   207ms) [ OK ]
ns4 MPTCP -> ns2 (10.0.2.1:10036      ) MPTCP   (duration   231ms) [ OK ]
ns4 MPTCP -> ns2 (dead:beef:2::1:10037) MPTCP   Unexpected revents: POLLERR/POLLNVAL(18)
copyfd_io_poll: poll timed out (events: POLLIN 1, POLLOUT 4)
(duration 30138ms) [ FAIL ] client exit code 5, server 2
netns ns2-60a9970c-1K6Du5 socket stat for 10037:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
TcpPassiveOpens                 1                  0.0
TcpEstabResets                  1                  0.0
TcpInSegs                       22                 0.0
TcpOutSegs                      16                 0.0
TcpOutRsts                      1                  0.0
TcpExtSyncookiesSent            1                  0.0
TcpExtSyncookiesRecv            2                  0.0
TcpExtSyncookiesFailed          1                  0.0
TcpExtTW                        1                  0.0
TcpExtTCPPureAcks               1                  0.0
TcpExtTCPBacklogCoalesce        2                  0.0
TcpExtTCPReqQFullDoCookies      1                  0.0
TcpExtTCPRcvCoalesce            3                  0.0
TcpExtTCPOFOQueue               6                  0.0
TcpExtTCPOrigDataSent           10                 0.0
TcpExtTCPAckCompressed          2                  0.0
MPTcpExtMPCapableSYNRX          1                  0.0
MPTcpExtMPCapableACKRX          1                  0.0
netns ns4-60a9970c-1K6Du5 socket stat for 10037:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
TcpActiveOpens                  1                  0.0
TcpEstabResets                  1                  0.0
TcpInSegs                       12                 0.0
TcpOutSegs                      22                 0.0
TcpOutRsts                      10                 0.0
TcpExtTCPOrigDataSent           10                 0.0
TcpExtTCPDelivered              1                  0.0
MPTcpExtMPCapableSYNTX          1                  0.0
MPTcpExtMPCapableSYNACKRX       1                  0.0
ns4 MPTCP -> ns3 (10.0.2.2:10038      ) MPTCP   (duration   107ms) [ OK ]
ns4 MPTCP -> ns3 (dead:beef:2::2:10039) MPTCP   (duration   110ms) [ OK ]
ns4 MPTCP -> ns3 (10.0.3.2:10040      ) MPTCP   (duration   106ms) [ OK ]
ns4 MPTCP -> ns3 (dead:beef:3::2:10041) MPTCP   (duration   143ms) [ OK ]
INFO: with peek mode: saveWithPeek
ns1 MPTCP -> ns1 (10.0.1.1:10042      ) MPTCP   (duration   150ms) [ OK ]
ns1 MPTCP -> ns1 (10.0.1.1:10043      ) TCP     (duration   105ms) [ OK ]
ns1 TCP   -> ns1 (10.0.1.1:10044      ) MPTCP   (duration    99ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10045) MPTCP   (duration   145ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10046) TCP     (duration   108ms) [ OK ]
ns1 TCP   -> ns1 (dead:beef:1::1:10047) MPTCP   (duration   120ms) [ OK ]
INFO: with peek mode: saveAfterPeek
ns1 MPTCP -> ns1 (10.0.1.1:10048      ) MPTCP   (duration   141ms) [ OK ]
ns1 MPTCP -> ns1 (10.0.1.1:10049      ) TCP     (duration   102ms) [ OK ]
ns1 TCP   -> ns1 (10.0.1.1:10050      ) MPTCP   (duration   115ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10051) MPTCP   (duration   139ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10052) TCP     (duration   140ms) [ OK ]
ns1 TCP   -> ns1 (dead:beef:1::1:10053) MPTCP   (duration   101ms) [ OK ]
Time: 100 seconds
[ perf record: Woken up 15 times to write data ]
failed to mmap file
[ perf record: Captured and wrote 3.753 MB perf.data ]

But I don't have more details; I guess we didn't send a RST :-/

# ./.virtme/perf/perf report -i tools/testing/selftests/net/mptcp/perf.data -k .virtme/build/vmlinux --stdio --header
Error:
The tools/testing/selftests/net/mptcp/perf.data data has no samples!
# ========
# captured on    : Sat May 22 23:44:49 2021
# header version : 1
# data offset    : 408
# data size      : 3935112
# feat offset    : 3935520
# hostname : (none)
# os release : 5.12.0+
# perf version : 5.12.ge47b2343a40c
# arch : x86_64
# nrcpus online : 2
# nrcpus avail : 2
# cpudesc : AMD EPYC 7401P 24-Core Processor
# cpuid : AuthenticAMD,23,1,2
# total memory : 2029280 kB
# cmdline : /usr/bin/perf record -ag -e probe:tcp_send_active_reset ./mptcp_connect.sh -l 0.99% -d 4 -r 94% 99% 
# event : name = probe:tcp_send_active_reset, , id = { 17265, 17266 }, type = 2, size = 120, config = 0x847, { sample_period, sample_freq } = 1, sample_type = IP|TID|TIME|CALLCHAIN|CPU|PERIOD|RAW|IDENTIFIER, read_format = ID, disabled = 1, inherit = 1, sample_id_>
# event : name = dummy:HG, , id = { 17267, 17268 }, type = 1, size = 120, config = 0x9, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|CALLCHAIN|CPU|PERIOD|IDENTIFIER, read_format = ID, inherit = 1, mmap = 1, comm = 1, freq = 1, task = 1, sample>
# CPU_TOPOLOGY info available, use -I to display
# NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: software = 1, uprobe = 7, cpu = 4, breakpoint = 5, tracepoint = 2, kprobe = 6, msr = 8
# CACHE info available, use -I to display
# time of first sample : 0.000000
# time of last sample : 0.000000
# sample duration :      0.000 ms
# cpu pmu capabilities: max_precise=0
# missing features: BUILD_ID BRANCH_STACK GROUP_DESC AUXTRACE STAT MEM_TOPOLOGY CLOCKID DIR_FORMAT COMPRESSED CLOCK_DATA 
# ========
#

Same with the perf.data.old file, which refers to the previous test that passed successfully:

# ./.virtme/perf/perf report -i tools/testing/selftests/net/mptcp/perf.data.old -k .virtme/build/vmlinux --stdio --header
Error:
The tools/testing/selftests/net/mptcp/perf.data.old data has no samples!
# ========
# captured on    : Sat May 22 23:43:07 2021
# header version : 1
# data offset    : 408
# data size      : 3926528
# feat offset    : 3926936
# hostname : (none)
# os release : 5.12.0+
# perf version : 5.12.ge47b2343a40c
# arch : x86_64
# nrcpus online : 2
# nrcpus avail : 2
# cpudesc : AMD EPYC 7401P 24-Core Processor
# cpuid : AuthenticAMD,23,1,2
# total memory : 2029280 kB
# cmdline : /usr/bin/perf record -ag -e probe:tcp_send_active_reset ./mptcp_connect.sh -l 0.99% -d 4 -r 94% 99% 
# event : name = probe:tcp_send_active_reset, , id = { 17254, 17255 }, type = 2, size = 120, config = 0x847, { sample_period, sample_freq } = 1, sample_type = IP|TID|TIME|CALLCHAIN|CPU|PERIOD|RAW|IDENTIFIER, read_format = ID, disabled = 1, inherit = 1, sample_id_>
# event : name = dummy:HG, , id = { 17256, 17257 }, type = 1, size = 120, config = 0x9, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|CALLCHAIN|CPU|PERIOD|IDENTIFIER, read_format = ID, inherit = 1, mmap = 1, comm = 1, freq = 1, task = 1, sample>
# CPU_TOPOLOGY info available, use -I to display
# NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: software = 1, uprobe = 7, cpu = 4, breakpoint = 5, tracepoint = 2, kprobe = 6, msr = 8
# CACHE info available, use -I to display
# time of first sample : 0.000000
# time of last sample : 0.000000
# sample duration :      0.000 ms
# cpu pmu capabilities: max_precise=0
# missing features: BUILD_ID BRANCH_STACK GROUP_DESC AUXTRACE STAT MEM_TOPOLOGY CLOCKID DIR_FORMAT COMPRESSED CLOCK_DATA 
# ========
#
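
Given the empty captures, it may be worth double-checking that the probe point can fire at all before the next long soak, e.g.:

# List the installed probe points, then count hits over a short window
perf probe -l
perf stat -a -e probe:tcp_send_active_reset -- sleep 30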

matttbe pushed a commit that referenced this issue May 25, 2021
When some mapping-related errors occur, we close the main
MPC subflow with a RST. We should instead fall back gracefully
to TCP, and do the reset only for MPJ subflows.

Fixes: d22f498 ("mptcp: process MP_CAPABLE data option")
Closes: #192
Reported-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
MPTCP Bugs automation moved this from Needs triage to Closed May 25, 2021
matttbe commented May 27, 2021

@pabeni: I just managed to reproduce it after a long time, using this script on 3ebd090:

# cat .virtme-exec-run
perf probe -a tcp_send_active_reset
perf probe -a tcp_v4_send_reset
perf probe -a tcp_v6_send_reset
cd tools/testing/selftests/net/mptcp
i=1
while true; do
        perf record -ag -e probe:tcp_send_active_reset -e probe:tcp_v4_send_reset -e probe:tcp_v6_send_reset ./mptcp_connect.sh -l "0.99%" -d 4 -r "94% 99%" || break
        has_call_trace && break
        date -R
        echo "=== $i ==="
        i=$((i+1))
done

Here is the output:

=== 2556 ===
++ i=2557
++ true
++ perf record -ag -e probe:tcp_send_active_reset -e probe:tcp_v4_send_reset -e probe:tcp_v6_send_reset ./mptcp_connect.sh -l 0.99% -d 4 -r '94% 99%'
Couldn't synthesize bpf events.
[191570.880139] IPv6: ADDRCONF(NETDEV_CHANGE): ns1eth2: link becomes ready
[191571.239268] IPv6: ADDRCONF(NETDEV_CHANGE): ns2eth3: link becomes ready
[191571.581670] IPv6: ADDRCONF(NETDEV_CHANGE): ns2eth1: link becomes ready
[191571.601311] IPv6: ADDRCONF(NETDEV_CHANGE): ns4eth3: link becomes ready
[191571.607443] IPv6: ADDRCONF(NETDEV_CHANGE): ns3eth4: link becomes ready
INFO: set ns3-60afc5f4-8gmNJy dev ns3eth2: ethtool -K tso off
INFO: set ns4-60afc5f4-8gmNJy dev ns4eth3: ethtool -K  gso off gro off
Created /tmp/tmp.zMISX2g1ka (size 4576284       /tmp/tmp.zMISX2g1ka) containing data sent by client
Created /tmp/tmp.9WyphPCbnd (size 6240284       /tmp/tmp.9WyphPCbnd) containing data sent by server
New MPTCP socket can be blocked via sysctl              [ OK ]
setsockopt(..., TCP_ULP, "mptcp", ...) blocked  [ OK ]
INFO: validating network environment with pings
INFO: Using loss of 0.99% delay 4 ms reorder 94% 99% with delay 1ms on ns3eth4
ns1 MPTCP -> ns1 (10.0.1.1:10000      ) MPTCP   (duration   143ms) [ OK ]
ns1 MPTCP -> ns1 (10.0.1.1:10001      ) TCP     (duration   113ms) [ OK ]
ns1 TCP   -> ns1 (10.0.1.1:10002      ) MPTCP   (duration   126ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP   (duration   163ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP     (duration   111ms) [ OK ]
ns1 TCP   -> ns1 (dead:beef:1::1:10005) MPTCP   (duration   120ms) [ OK ]
ns1 MPTCP -> ns2 (10.0.1.2:10006      ) MPTCP   (duration   139ms) [ OK ]
ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP   (duration   175ms) [ OK ]
ns1 MPTCP -> ns2 (10.0.2.1:10008      ) MPTCP   (duration   151ms) [ OK ]
ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP   (duration   139ms) [ OK ]
ns1 MPTCP -> ns3 (10.0.2.2:10010      ) MPTCP   (duration   199ms) [ OK ]
ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP   (duration   606ms) [ OK ]
ns1 MPTCP -> ns3 (10.0.3.2:10012      ) MPTCP   (duration   179ms) [ OK ]
ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP   (duration   203ms) [ OK ]
ns1 MPTCP -> ns4 (10.0.3.1:10014      ) MPTCP   (duration   199ms) [ OK ]
ns1 MPTCP -> ns4 (dead:beef:3::1:10015) MPTCP   (duration   268ms) [ OK ]
ns2 MPTCP -> ns1 (10.0.1.1:10016      ) MPTCP   (duration   127ms) [ OK ]
ns2 MPTCP -> ns1 (dead:beef:1::1:10017) MPTCP   (duration   134ms) [ OK ]
ns2 MPTCP -> ns3 (10.0.2.2:10018      ) MPTCP   (duration   169ms) [ OK ]
ns2 MPTCP -> ns3 (dead:beef:2::2:10019) MPTCP   (duration   208ms) [ OK ]
ns2 MPTCP -> ns3 (10.0.3.2:10020      ) MPTCP   (duration   188ms) [ OK ]
ns2 MPTCP -> ns3 (dead:beef:3::2:10021) MPTCP   (duration   175ms) [ OK ]
ns2 MPTCP -> ns4 (10.0.3.1:10022      ) MPTCP   (duration   229ms) [ OK ]
ns2 MPTCP -> ns4 (dead:beef:3::1:10023) MPTCP   (duration   285ms) [ OK ]
ns3 MPTCP -> ns1 (10.0.1.1:10024      ) MPTCP   (duration   172ms) [ OK ]
ns3 MPTCP -> ns1 (dead:beef:1::1:10025) MPTCP   (duration   183ms) [ OK ]
ns3 MPTCP -> ns2 (10.0.1.2:10026      ) MPTCP   (duration   218ms) [ OK ]
ns3 MPTCP -> ns2 (dead:beef:1::2:10027) MPTCP   (duration   365ms) [ OK ]
ns3 MPTCP -> ns2 (10.0.2.1:10028      ) MPTCP   (duration  1071ms) [ OK ]
ns3 MPTCP -> ns2 (dead:beef:2::1:10029) MPTCP   (duration   160ms) [ OK ]
ns3 MPTCP -> ns4 (10.0.3.1:10030      ) MPTCP   (duration   126ms) [ OK ]
ns3 MPTCP -> ns4 (dead:beef:3::1:10031) MPTCP   (duration   138ms) [ OK ]
ns4 MPTCP -> ns1 (10.0.1.1:10032      ) MPTCP   (duration   275ms) [ OK ]
ns4 MPTCP -> ns1 (dead:beef:1::1:10033) MPTCP   (duration   188ms) [ OK ]
ns4 MPTCP -> ns2 (10.0.1.2:10034      ) MPTCP   (duration   174ms) [ OK ]
ns4 MPTCP -> ns2 (dead:beef:1::2:10035) MPTCP   (duration   188ms) [ OK ]
ns4 MPTCP -> ns2 (10.0.2.1:10036      ) MPTCP   Unexpected revents: POLLERR/POLLNVAL(18)
copyfd_io_poll: poll timed out (events: POLLIN 1, POLLOUT 4)
(duration 30121ms) [ FAIL ] client exit code 5, server 2
netns ns2-60afc5f4-8gmNJy socket stat for 10036:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
TcpPassiveOpens                 1                  0.0
TcpEstabResets                  1                  0.0
TcpInSegs                       20                 0.0
TcpOutSegs                      16                 0.0
TcpOutRsts                      1                  0.0
TcpExtSyncookiesSent            1                  0.0
TcpExtSyncookiesRecv            2                  0.0
TcpExtSyncookiesFailed          1                  0.0
TcpExtTW                        4                  0.0
TcpExtTCPPureAcks               1                  0.0
TcpExtTCPReqQFullDoCookies      1                  0.0
TcpExtTCPRcvCoalesce            5                  0.0
TcpExtTCPOFOQueue               7                  0.0
TcpExtTCPOrigDataSent           10                 0.0
TcpExtTCPAckCompressed          3                  0.0
MPTcpExtMPCapableSYNRX          1                  0.0
MPTcpExtMPCapableACKRX          1                  0.0
netns ns4-60afc5f4-8gmNJy socket stat for 10036:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
TcpActiveOpens                  1                  0.0
TcpEstabResets                  1                  0.0
TcpInSegs                       11                 0.0
TcpOutSegs                      20                 0.0
TcpOutRsts                      9                  0.0
TcpExtTCPOrigDataSent           9                  0.0
TcpExtTCPDelivered              1                  0.0
MPTcpExtMPCapableSYNTX          1                  0.0
MPTcpExtMPCapableSYNACKRX       1                  0.0
ns4 MPTCP -> ns2 (dead:beef:2::1:10037) MPTCP   (duration   192ms) [ OK ]
ns4 MPTCP -> ns3 (10.0.2.2:10038      ) MPTCP   (duration   138ms) [ OK ]
ns4 MPTCP -> ns3 (dead:beef:2::2:10039) MPTCP   (duration   144ms) [ OK ]
ns4 MPTCP -> ns3 (10.0.3.2:10040      ) MPTCP   (duration   124ms) [ OK ]
ns4 MPTCP -> ns3 (dead:beef:3::2:10041) MPTCP   (duration   141ms) [ OK ]
INFO: with peek mode: saveWithPeek
ns1 MPTCP -> ns1 (10.0.1.1:10042      ) MPTCP   (duration   173ms) [ OK ]
ns1 MPTCP -> ns1 (10.0.1.1:10043      ) TCP     (duration   124ms) [ OK ]
ns1 TCP   -> ns1 (10.0.1.1:10044      ) MPTCP   (duration   106ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10045) MPTCP   (duration   162ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10046) TCP     (duration   124ms) [ OK ]
ns1 TCP   -> ns1 (dead:beef:1::1:10047) MPTCP   (duration    98ms) [ OK ]
INFO: with peek mode: saveAfterPeek
ns1 MPTCP -> ns1 (10.0.1.1:10048      ) MPTCP   (duration   206ms) [ OK ]
ns1 MPTCP -> ns1 (10.0.1.1:10049      ) TCP     (duration   122ms) [ OK ]
ns1 TCP   -> ns1 (10.0.1.1:10050      ) MPTCP   (duration   126ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10051) MPTCP   (duration   160ms) [ OK ]
ns1 MPTCP -> ns1 (dead:beef:1::1:10052) TCP     (duration   113ms) [ OK ]
ns1 TCP   -> ns1 (dead:beef:1::1:10053) MPTCP   (duration   115ms) [ OK ]
Time: 100 seconds
[ perf record: Woken up 14 times to write data ]
failed to mmap file
[ perf record: Captured and wrote 3.731 MB perf.data ]

And info from perf:

# ./.virtme/perf/perf report -i tools/testing/selftests/net/mptcp/perf.data -k .virtme/build/vmlinux --stdio --header
# ========
# captured on    : Thu May 27 16:18:33 2021
# header version : 1
# data offset    : 712
# data size      : 3911616
# feat offset    : 3912328
# hostname : (none)
# os release : 5.12.0+
# perf version : 5.12.g3ebd090a8fbc
# arch : x86_64
# nrcpus online : 2
# nrcpus avail : 2
# cpudesc : AMD EPYC 7401P 24-Core Processor
# cpuid : AuthenticAMD,23,1,2
# total memory : 2029280 kB
# cmdline : /usr/bin/perf record -ag -e probe:tcp_send_active_reset -e probe:tcp_v4_send_reset -e probe:tcp_v6_send_reset ./mptcp_connect.sh -l 0.99% -d 4 -r 94% 99% 
# event : name = probe:tcp_send_active_reset, , id = { 38346, 38347 }, type = 2, size = 120, config = 0x847, { sample_period, sample_freq } = 1, sample_type = IP|TID|TIME|CALLCHAIN|CPU|PERIOD|RAW|IDENTIFIER, read_format = ID, disabled = 1, inherit = 1, sample_id_all = 1, exclude_guest = 1
# event : name = probe:tcp_v4_send_reset, , id = { 38348, 38349 }, type = 2, size = 120, config = 0x848, { sample_period, sample_freq } = 1, sample_type = IP|TID|TIME|CALLCHAIN|CPU|PERIOD|RAW|IDENTIFIER, read_format = ID, disabled = 1, inherit = 1, sample_id_all = 1, exclude_guest = 1
# event : name = probe:tcp_v6_send_reset, , id = { 38350, 38351 }, type = 2, size = 120, config = 0x849, { sample_period, sample_freq } = 1, sample_type = IP|TID|TIME|CALLCHAIN|CPU|PERIOD|RAW|IDENTIFIER, read_format = ID, disabled = 1, inherit = 1, sample_id_all = 1, exclude_guest = 1
# event : name = dummy:HG, , id = { 38352, 38353 }, type = 1, size = 120, config = 0x9, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|CALLCHAIN|CPU|PERIOD|IDENTIFIER, read_format = ID, inherit = 1, mmap = 1, comm = 1, freq = 1, task = 1, sample_id_all = 1, mmap2 = 1, comm_exec = 1, ksymbol = 1, bpf_event = 1
# CPU_TOPOLOGY info available, use -I to display
# NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: software = 1, uprobe = 7, cpu = 4, breakpoint = 5, tracepoint = 2, kprobe = 6, msr = 8
# CACHE info available, use -I to display
# time of first sample : 0.000000
# time of last sample : 0.000000
# sample duration :      0.000 ms
# cpu pmu capabilities: max_precise=0
# missing features: BUILD_ID BRANCH_STACK GROUP_DESC AUXTRACE STAT MEM_TOPOLOGY CLOCKID DIR_FORMAT COMPRESSED CLOCK_DATA 
# ========
#
#
# Total Lost Samples: 0
#
# Samples: 15  of event 'probe:tcp_v4_send_reset'
# Event count (approx.): 15
#
# Children      Self  Command        Shared Object      Symbol                             
# ........  ........  .............  .................  ...................................
#
    60.00%    60.00%  swapper        [kernel.kallsyms]  [k] tcp_v4_send_reset
            |
            ---secondary_startup_64_no_verify
               start_secondary
               cpu_startup_entry
               do_idle
               default_idle_call
               default_idle
               asm_sysvec_apic_timer_interrupt
               sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] secondary_startup_64_no_verify
            |
            ---secondary_startup_64_no_verify
               start_secondary
               cpu_startup_entry
               do_idle
               default_idle_call
               default_idle
               asm_sysvec_apic_timer_interrupt
               sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] start_secondary
            |
            ---start_secondary
               cpu_startup_entry
               do_idle
               default_idle_call
               default_idle
               asm_sysvec_apic_timer_interrupt
               sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] cpu_startup_entry
            |
            ---cpu_startup_entry
               do_idle
               default_idle_call
               default_idle
               asm_sysvec_apic_timer_interrupt
               sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] do_idle
            |
            ---do_idle
               default_idle_call
               default_idle
               asm_sysvec_apic_timer_interrupt
               sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] default_idle_call
            |
            ---default_idle_call
               default_idle
               asm_sysvec_apic_timer_interrupt
               sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] default_idle
            |
            ---default_idle
               asm_sysvec_apic_timer_interrupt
               sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] asm_sysvec_apic_timer_interrupt
            |
            ---asm_sysvec_apic_timer_interrupt
               sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] sysvec_apic_timer_interrupt
            |
            ---sysvec_apic_timer_interrupt
               irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] irq_exit_rcu
            |
            ---irq_exit_rcu
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] __do_softirq
            |
            ---__do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] net_rx_action
            |
            ---net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] __napi_poll
            |
            ---__napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] process_backlog
            |
            ---process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] __netif_receive_skb_one_core
            |
            ---__netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] ip_rcv
            |
            ---ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] ip_local_deliver
            |
            ---ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] ip_local_deliver_finish
            |
            ---ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] ip_protocol_deliver_rcu
            |
            ---ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_send_reset

    60.00%     0.00%  swapper        [kernel.kallsyms]  [k] tcp_v4_rcv
            |
            ---tcp_v4_rcv
               tcp_v4_send_reset

    33.33%    33.33%  ksoftirqd/0    [kernel.kallsyms]  [k] tcp_v4_send_reset
            |
            ---ret_from_fork
               kthread
               smpboot_thread_fn
               run_ksoftirqd
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] ret_from_fork
            |
            ---ret_from_fork
               kthread
               smpboot_thread_fn
               run_ksoftirqd
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] kthread
            |
            ---kthread
               smpboot_thread_fn
               run_ksoftirqd
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] smpboot_thread_fn
            |
            ---smpboot_thread_fn
               run_ksoftirqd
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] run_ksoftirqd
            |
            ---run_ksoftirqd
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] __do_softirq
            |
            ---__do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] net_rx_action
            |
            ---net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] __napi_poll
            |
            ---__napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] process_backlog
            |
            ---process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] __netif_receive_skb_one_core
            |
            ---__netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] ip_rcv
            |
            ---ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] ip_local_deliver
            |
            ---ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] ip_local_deliver_finish
            |
            ---ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] ip_protocol_deliver_rcu
            |
            ---ip_protocol_deliver_rcu
               tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

    33.33%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] tcp_v4_rcv
            |
            ---tcp_v4_rcv
               |          
               |--26.67%--tcp_v4_send_reset
               |          
                --6.67%--tcp_v4_do_rcv
                          tcp_v4_send_reset

     6.67%     6.67%  mptcp_connect  [kernel.kallsyms]  [k] tcp_v4_send_reset
            |
            ---write
               entry_SYSCALL_64_after_hwframe
               do_syscall_64
               ksys_write
               vfs_write
               new_sync_write
               sock_write_iter
               sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  ksoftirqd/0    [kernel.kallsyms]  [k] tcp_v4_do_rcv
            |
            ---tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  libc-2.31.so       [.] write
            |
            ---write
               entry_SYSCALL_64_after_hwframe
               do_syscall_64
               ksys_write
               vfs_write
               new_sync_write
               sock_write_iter
               sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
            |
            ---entry_SYSCALL_64_after_hwframe
               do_syscall_64
               ksys_write
               vfs_write
               new_sync_write
               sock_write_iter
               sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] do_syscall_64
            |
            ---do_syscall_64
               ksys_write
               vfs_write
               new_sync_write
               sock_write_iter
               sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] ksys_write
            |
            ---ksys_write
               vfs_write
               new_sync_write
               sock_write_iter
               sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] vfs_write
            |
            ---vfs_write
               new_sync_write
               sock_write_iter
               sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] new_sync_write
            |
            ---new_sync_write
               sock_write_iter
               sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] sock_write_iter
            |
            ---sock_write_iter
               sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] sock_sendmsg
            |
            ---sock_sendmsg
               mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] mptcp_sendmsg
            |
            ---mptcp_sendmsg
               __mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] __mptcp_push_pending
            |
            ---__mptcp_push_pending
               mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] mptcp_push_release.isra.0
            |
            ---mptcp_push_release.isra.0
               __tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] __tcp_push_pending_frames
            |
            ---__tcp_push_pending_frames
               tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] tcp_write_xmit
            |
            ---tcp_write_xmit
               __tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] __tcp_transmit_skb
            |
            ---__tcp_transmit_skb
               __ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] __ip_queue_xmit
            |
            ---__ip_queue_xmit
               ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] ip_output
            |
            ---ip_output
               ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] ip_finish_output2
            |
            ---ip_finish_output2
               __local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] __local_bh_enable_ip
            |
            ---__local_bh_enable_ip
               do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] do_softirq
            |
            ---do_softirq
               __do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] __do_softirq
            |
            ---__do_softirq
               net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] net_rx_action
            |
            ---net_rx_action
               __napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] __napi_poll
            |
            ---__napi_poll
               process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] process_backlog
            |
            ---process_backlog
               __netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] __netif_receive_skb_one_core
            |
            ---__netif_receive_skb_one_core
               ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] ip_rcv
            |
            ---ip_rcv
               ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] ip_local_deliver
            |
            ---ip_local_deliver
               ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] ip_local_deliver_finish
            |
            ---ip_local_deliver_finish
               ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] ip_protocol_deliver_rcu
            |
            ---ip_protocol_deliver_rcu
               tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] tcp_v4_rcv
            |
            ---tcp_v4_rcv
               tcp_v4_do_rcv
               tcp_v4_send_reset

     6.67%     0.00%  mptcp_connect  [kernel.kallsyms]  [k] tcp_v4_do_rcv
            |
            ---tcp_v4_do_rcv
               tcp_v4_send_reset



#
# (Tip: Skip collecting build-id when recording: perf record -B)
#
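
For reference, the report above does not show how it was captured; something along these lines usually produces this kind of folded call-graph output (a sketch, not the exact invocation that was used):

  # sample all CPUs with call graphs while the failing test runs
  perf record -a -g -- sleep 30
  # fold the samples into call chains, as rendered above
  perf report --stdio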

Should I open a new ticket?

Please tell me if you need more info, e.g. perf file + vmlinux :)

jenkins-tessares pushed a commit that referenced this issue May 28, 2021
When some mapping related errors occurs we close the main
MPC subflow with a RST. We should instead fallback gracefully
to TCP, and do the reset only for MPJ subflows.

Fixes: d22f498 ("mptcp: process MP_CAPABLE data option")
Closes: #192
Reported-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
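
The fix the message describes amounts to treating the two subflow types differently on a mapping error; here is a minimal sketch of that logic, assuming hypothetical helper names (this is not the actual patch):

  /* sketch: keep the MPTCP connection alive on a mapping error */
  if (mapping_error) {
          if (subflow_is_mp_join(ssk))        /* hypothetical helper */
                  subflow_send_reset(ssk);    /* MPJ subflow: RST is fine */
          else
                  mptcp_fallback_to_tcp(ssk); /* MPC subflow: degrade to plain TCP */
  }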
@pabeni

pabeni commented May 28, 2021

=== 2556 ===
++ i=2557

As I said, it was either very good or very bad. Looks like it's not the first option.

> Please tell me if you need more info, e.g. perf file + vmlinux :)

Yes, please open a different issue, as this one is quite different: the TCP stack is sending the reset, because tcp_child_process() returns an error code. Likely related to:

TcpExtSyncookiesFailed 1 0.0

(why the heck do we have syncookies with a single client?!?)
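
A quick way to check the syncookie angle (standard sysctl and nstat counters; whether they are the right lead here is an assumption):

  sysctl net.ipv4.tcp_syncookies    # 1 = send cookies once the SYN backlog overflows
  nstat -az | grep -i SynCookie     # TcpExtSyncookies{Sent,Recv,Failed} so far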

spoorva pushed a commit to spoorva/packetdrill that referenced this issue Jul 27, 2021
This explicitly cover the critical scenario triggering:

multipath-tcp/mptcp_net-next#192

and should fail till the kernel patch:

"mptcp: always parse mptcp options for MPC reqsk"

is merged

Signed-off-by: Paolo Abeni <pabeni@redhat.com>