Stream fails with IPv6 candidate #104

Open
thinkski opened this issue Sep 5, 2019 · 0 comments

Labels: bug Something isn't working

thinkski commented Sep 5, 2019

With an IPv6 candidate, the stream fails after a few seconds of streaming (the following log is from the add-audio-support branch; the underrun errors should be unrelated to this issue):

lanikai@farkle:~ $ ./alohartcd -ipv6=0
Open https://farkle.local:8000/
2019-09-05 02:05:03.017 I/ice[base.go:111] Listening on udp/192.168.7.1:46630
2019-09-05 02:05:03.024 I/ice[base.go:111] Listening on udp/169.254.50.166:39603
2019-09-05 02:05:03.029 I/ice[base.go:111] Listening on udp/192.168.1.190:45534
2019-09-05 02:05:03.035 I/ice[base.go:111] Listening on udp/[2600:1700:d2b1:23b0::3e]:42149
2019-09-05 02:05:03.039 I/ice[base.go:111] Listening on udp/[2600:1700:d2b1:23b0:fe10:b5d0:733e:6375]:58655
2019-09-05 02:05:03.045 I/ice[agent.go:132] Remote ICE candidate:2169544139 1 udp 2113937151 10.6.5.193 61824 typ host generation 0 ufrag CJP1 network-cost 999
2019-09-05 02:05:03.051 I/ice[agent.go:132] Remote ICE candidate:2169544139 1 udp 2113937151 10.6.5.193 60143 typ host generation 0 ufrag CJP1 network-cost 999
2019-09-05 02:05:03.055 I/ice[agent.go:160] Local ICE candidate:ABQ5XQJR 1 udp 2130706431 2600:1700:d2b1:23b0:fe10:b5d0:733e:6375 58655 typ host
2019-09-05 02:05:03.064 I/ice[agent.go:160] Local ICE candidate:SNJVDD3K 1 udp 2130706175 192.168.7.1 46630 typ host
2019-09-05 02:05:03.071 I/ice[agent.go:160] Local ICE candidate:FBKS3SDC 1 udp 2130705919 169.254.50.166 39603 typ host
2019-09-05 02:05:03.077 I/ice[agent.go:160] Local ICE candidate:MBTOTDBL 1 udp 2130705663 192.168.1.190 45534 typ host
2019-09-05 02:05:03.083 I/ice[agent.go:160] Local ICE candidate:DOFO6YXQ 1 udp 2130706175 2600:1700:d2b1:23b0::3e 42149 typ host
2019-09-05 02:05:03.1-0 I/ice[agent.go:132] Remote ICE candidate:842163049 1 udp 1677729535 172.83.43.133 61824 typ srflx raddr 10.6.5.193 rport 61824 generation 0 ufrag CJP1 network-cost 999
2019-09-05 02:05:03.157 I/ice[checklist.go:327] Selected Pair#4: udp/[2600:1700:d2b1:23b0:fe10:b5d0:733e:6375]:58655 -> udp/[2600:1700:d2b1:23b0:8194:e19e:13fd:ccc2]:61825 [Succeeded]
2019-09-05 02:05:03.19- I/ice[agent.go:160] Local ICE candidate:ZO5FFM76 1 udp 1694497535 104.60.125.95 45534 typ srflx raddr 0.0.0.0 rport 0
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8306:(snd_pcm_recover) underrun occurred
2019-09-05 02:05:08.208 I/alohartc[peer_connection.go:625] datastream done

2019-09-05 02:05:08.21- I/alohartc[peer_connection.go:586] EOF
2019/09/05 02:05:08.216117 main.go:205: ice: read timeout
2019-09-05 02:05:08.217 I/alohartc[peer_connection.go:632] Closing peer connection
2019-09-05 02:05:08.249 I/v4l2[source.go:56] EOF
2019-09-05 02:05:09.47- W/ice[base.go:282] Dropping data packet (first byte 80) because reader cannot keep up

I believe this is similar to a previous issue where the local and remote peers selected different candidate pairs. That underlying problem has since been fixed (i.e. the local candidate priorities are now in fact distinct), so something else must be causing this failure.
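
For reference, ICE candidate priorities are computed per RFC 8445 §5.1.2.1 as (type preference << 24) + (local preference << 8) + (256 - component ID). The minimal Go sketch below (the helper name candidatePriority is hypothetical, not part of alohartc) reproduces the host-candidate priorities seen in the log above, which is consistent with the priorities being distinct:

```go
package main

import "fmt"

// candidatePriority computes an ICE candidate priority per RFC 8445 §5.1.2.1:
//   priority = (typePref << 24) | (localPref << 8) | (256 - componentID)
// typePref is 126 for host candidates, localPref orders candidates of the
// same type (here: per interface), and componentID is 1 for RTP.
func candidatePriority(typePref, localPref, componentID uint32) uint32 {
	return typePref<<24 | localPref<<8 | (256 - componentID)
}

func main() {
	// The two highest-priority local host candidates in the log correspond
	// to typePref=126 with localPref=65535 and 65534 respectively.
	fmt.Println(candidatePriority(126, 65535, 1)) // 2130706431
	fmt.Println(candidatePriority(126, 65534, 1)) // 2130706175
}
```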

thinkski added the bug label Sep 5, 2019
thinkski self-assigned this Sep 5, 2019