ST: Reliance on EAGAIN scheduling results in significant timer error in RTC scenarios. #2194

Closed · winlinvip opened this issue Feb 10, 2021 · 9 comments
Labels: Discussion, Enhancement, TransByAI, WebRTC

winlinvip (Member) commented Feb 10, 2021

ST currently only switches to other coroutines when an I/O function hits EAGAIN, which is undoubtedly the most efficient approach. However, when there are a large number of packets to process, it leads to significant timer error.

  1. At this point the concurrency has essentially exceeded the system's processing capacity. For example, if the timer error exceeds 80ms, flow control should kick in, because it is impossible to handle that many packets without starving the timer coroutine and the other coroutines (see the sketch after this list).
  2. If the timer needs higher priority, the scheduler still has to consider yielding voluntarily after a certain time slice, rather than relying solely on EAGAIN.
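
To make item 1 concrete, here is a minimal sketch of a flow-control check, assuming a hypothetical counter that tracks the most recent timer deviation; the 80ms threshold comes from the text above, and none of the names below are from the actual SRS code.

```cpp
#include <cstdint>

// Hypothetical: most recent observed timer deviation in milliseconds,
// updated elsewhere by the timer coroutine when it fires late.
extern int64_t last_timer_error_ms;

// Threshold taken from the discussion above: beyond 80ms of timer error,
// the concurrency has exceeded the system's capacity.
static const int64_t kOverloadThresholdMs = 80;

// Decide whether to shed load (e.g. drop packets or reject new clients)
// instead of accepting more work and starving the timer coroutine.
bool should_apply_flow_control() {
    return last_timer_error_ms > kOverloadThresholdMs;
}
```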

In the ST code, I count the number of scheduler switches and the delay before entering the timer. For example, when the system is busy:

Hybrid cpu=93.39%,1042MB, udp=60995,606,6633,0, epoll=606,23,23,1, sched=536,43,4,21,1,1,0,0,0, clock=1,1,1,1,2,10,9,1,0

Note: The exact log format and field definitions may change slightly; refer to the code for the authoritative version.

  • cpu=93.39% - CPU usage, which is quite high.
  • epoll=606 - epoll_wait is called 606 times per second on average. Note this is an average; the distribution may not be even.
  • udp=60995,606 - recvfrom is called about 60,000 times per second on average; 606 of those calls return EAGAIN, at which point the scheduler switches to other coroutines.
  • sched=536,43,4,21,1,1 - The scheduler's error (the maximum possible timer error): 536 switches are within 1ms, 43 within 10ms, 4 within 40ms, 21 at about 40ms, 1 within 80ms, and 1 within 160ms.
  • clock=1,1,1,1,2,10,9,1 - The error of the system's 20ms-precision timer: 1 event within 15ms, 1 within 20ms, 1 within 25ms, 1 within 30ms, 2 within 35ms, 10 within 40ms, 9 within 80ms, and 1 within 160ms (a bucketing sketch follows below).
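
To make the clock= histogram concrete, here is a minimal bucketing sketch, assuming the upper bounds listed above (15/20/25/30/35/40/80/160ms plus an overflow bucket); the names are illustrative and not taken from the SRS or ST source.

```cpp
#include <cstdint>

// Assumed bucket upper bounds, matching the clock= description above.
static const int64_t kClockBucketsMs[] = {15, 20, 25, 30, 35, 40, 80, 160};
static const int kNrBuckets = sizeof(kClockBucketsMs) / sizeof(kClockBucketsMs[0]);
static uint64_t clock_histogram[kNrBuckets + 1]; // last slot counts >160ms

// Record one measured interval of the 20ms timer into its bucket.
void record_clock_interval(int64_t interval_ms) {
    for (int i = 0; i < kNrBuckets; i++) {
        if (interval_ms <= kClockBucketsMs[i]) {
            clock_histogram[i]++;
            return;
        }
    }
    clock_histogram[kNrBuckets]++; // overflow: worse than 160ms
}
```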


If a 20ms timer is enabled in an RTC (real-time communication) scenario, the clock distribution needs to stay within 15 to 25ms to guarantee the accuracy of this timer.


If multiple 20ms-level timers are enabled, the errors accumulate (executing one timer delays the next one).


In a live-streaming scenario with around 5 seconds of latency this does not matter, but in an RTC scenario, where timers must stay within about 20ms of error, it is a critical issue.


Approach to the Solution


I am currently considering the following approach. If anyone has a better plan, please leave a comment.

  • Reduce the number of high-precision timers in RTC to avoid accumulated errors.
  • Improve the ST scheduler so that it does not schedule only on EAGAIN. There are two directions:
    • Schedule actively at the application layer: for example, when the system is busy and high-precision timers exist, throttle and switch voluntarily at the application layer (see the sketch after this list).
    • Add fairness to ST's scheduling and allocate time slices to high-priority timers, although coroutines currently have no concept of priority.
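
As a rough illustration of the application-layer option referenced above, here is a sketch of a check that only switches when doing so can actually help a timer; the busy/timer flags and the yield placeholder are assumptions, not SRS APIs.

```cpp
// Hypothetical state, maintained elsewhere: whether the system is currently
// busy and whether any high-precision (e.g. 20ms) timers are registered.
extern bool system_is_busy;
extern bool has_high_precision_timers;

// Placeholder for the real coroutine yield primitive (e.g. srs_thread_yield).
void app_layer_yield();

// Called from I/O-heavy loops at the application layer: throttle and switch
// only when the switch can actually reduce timer error.
void maybe_yield_for_timers() {
    if (system_is_busy && has_high_precision_timers) {
        app_layer_yield();
    }
}
```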


TRANS_BY_GPT3

winlinvip added the Discussion label Feb 10, 2021
winlinvip (Member, Author) commented Feb 11, 2021

[screenshot: 20ms timer trigger intervals when the system is idle]

SRS 4.0.71: when the system is idle, the timer is accurate. With a 20ms timer, the trigger interval is mostly within 20~25ms.

TRANS_BY_GPT3

winlinvip (Member, Author) commented Feb 11, 2021

[screenshot: 20ms timer trigger intervals when the system is under heavy load]

SRS 4.0.71: when the system is under heavy load, the timer error is significant. With a 20ms timer, only about 50% of trigger intervals fall within 20~25ms, and in the worst case the timer fires after 160ms. srs-bench was used to simulate 600 streaming channels.

TRANS_BY_GPT3

winlinvip (Member, Author) commented Feb 11, 2021

Raise the priority of the timer coroutine by putting it at the head of the run queue, so the timer executes first. Reference: ossrs/state-threads@9fe8cfe

The timer is placed at the end of the RunQ, which means executing the IO coroutine first and then the timer coroutine.
cpu=48.38%, timer=39,0,0, io=47,44,0,0,10669,109, epoll=84,3,3,25, sched=44,27,4,1,1,1,3,0,0, clock=1,13,10,4,1,1,5,1,0, co=7792,10755,84
cpu=72.62%, timer=39,0,0, io=47,44,0,0,10669,109, epoll=84,3,3,25, sched=44,27,4,1,1,1,3,0,0, clock=1,13,10,4,1,1,5,1,0, co=7792,10755,84
cpu=66.56%, timer=43,0,0, io=51,47,0,0,8505,322, epoll=296,20,20,19, sched=280,9,1,1,2,1,1,0,0, clock=1,21,9,2,1,1,3,0,0, co=7792,8469,296
cpu=61.17%, timer=43,0,0, io=51,47,0,0,8505,322, epoll=296,20,20,19, sched=280,9,1,1,2,1,1,0,0, clock=1,21,9,2,1,1,3,0,0, co=7792,8469,296
cpu=61.40%, timer=43,0,0, io=52,47,0,0,9505,250, epoll=253,18,18,23, sched=232,15,1,1,2,1,1,0,0, clock=1,21,8,3,2,1,3,1,0, co=7792,9478,253
cpu=50.31%, timer=43,0,0, io=52,47,0,0,9505,250, epoll=253,18,18,23, sched=232,15,1,1,2,1,1,0,0, clock=1,21,8,3,2,1,3,1,0, co=7792,9478,253
The timer is placed at the beginning of the RunQ, which means executing the timer coroutine first and then the IO coroutine.
cpu=47.32%, timer=36,0,0, io=42,39,0,0,11200,0, epoll=64,1,1,24, sched=25,26,6,2,1,1,2,1,0, clock=0,6,16,4,3,1,1,1,0, co=7980,11279,64
cpu=57.00%, timer=36,0,0, io=42,39,0,0,11200,0, epoll=64,1,1,24, sched=25,26,6,2,1,1,2,1,0, clock=0,6,16,4,3,1,1,1,0, co=7980,11279,64
cpu=64.40%, timer=42,0,0, io=51,47,0,0,9104,315, epoll=219,16,16,23, sched=194,20,1,1,1,1,2,1,0, clock=1,18,12,3,1,0,2,1,0, co=7980,9152,219
cpu=66.20%, timer=42,0,0, io=51,47,0,0,9104,315, epoll=219,16,16,23, sched=194,20,1,1,1,1,2,1,0, clock=1,18,12,3,1,0,2,1,0, co=7980,9152,219
cpu=62.95%, timer=41,0,0, io=51,46,0,0,10022,257, epoll=256,16,16,19, sched=231,20,1,1,1,1,2,0,0, clock=1,16,10,5,2,1,2,0,0, co=7980,9944,256
cpu=46.91%, timer=41,0,0, io=51,46,0,0,10022,257, epoll=256,16,16,19, sched=231,20,1,1,1,1,2,0,0, clock=1,16,10,5,2,1,2,0,0, co=7980,9944,256

The data above is for a small number of timers, one published stream, and 4000 playing clients.

The results show no difference: whether the timer runs first or last does not matter, because within one scheduling cycle all ready coroutines are placed in the RunQ and executed together, so the timer gets the same chance and amount of execution time either way.

However, executing the timer first gives us the option of having other coroutines voluntarily yield their time slices. The timer then gets more chances to run, possibly multiple times within one scheduling cycle, which reduces its error.
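
As an illustration (using a plain std::list rather than the actual ST data structures), this is the effect of the change referenced above: the timer coroutine is inserted at the head of the run queue instead of being appended to the tail.

```cpp
#include <list>

// Simplified model of a ready coroutine; the real ST type is different.
struct Coroutine {
    bool is_timer = false;
};

static std::list<Coroutine*> run_q; // coroutines ready to run in this cycle

void make_runnable(Coroutine* co) {
    if (co->is_timer) {
        run_q.push_front(co); // timer first: dispatched at the start of the cycle
    } else {
        run_q.push_back(co);  // I/O coroutines keep FIFO order at the tail
    }
}
```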

TRANS_BY_GPT3

winlinvip (Member, Author) commented Feb 11, 2021

One important contributor to timer error is a busy file descriptor. For example:

  1. FD1 has a large amount of data to read and write; it takes many read/write operations before it hits EAGAIN and switches to the next coroutine.
  2. FD2 is the next coroutine; it also has a lot of data and needs many I/O operations before switching.
  3. The timer is the next coroutine. By now it has most likely already expired, possibly by a large margin: a 20ms timer may have been overdue for 40ms or more.

With the earlier modification that moved the timer to the front of the RunQ, the sequence becomes:

  1. Coming out of the previous scheduling cycle, the I/O coroutines have already consumed a large time slice on reads and writes, say 40ms.
  2. The timer, now the first coroutine, has therefore certainly expired, and by a lot: a 20ms timer may have been overdue for 40ms or more.
  3. The FD coroutines that follow then delay the timer of the next scheduling cycle:
    • FD1 has a large amount of data to read and write; it hits EAGAIN only after many read/write operations and then switches to the next coroutine.
    • FD2 is the next coroutine; it behaves the same way, switching only after just as many I/O operations.

With the improvement, an I/O coroutine proactively interrupts its own time slice to check the timer, and runs the timer if it has expired. The sequence then becomes:

  1. The timer is the first coroutine; assume it has not expired yet.
  2. FD1 has a large amount of I/O and needs many read/write operations. Suppose it finishes its first round, e.g. a writev() of 1000 iovecs; then:
  3. The FD1 coroutine calls yield, voluntarily giving up its time slice, checking the timer, and putting itself at the tail of the RunQ. (When yielding, the timer is checked; if it has expired, the timer coroutine runs first, and then the next coroutine is scheduled.)
  4. FD2 is the next coroutine; like FD1, it may also switch to the timer.
  5. FD1 runs again, because its yield placed it at the tail of the RunQ.

Improved strategy:

  1. The user actively calls the srs_thread_yield function to yield the current coroutine; it checks and runs any expired timer and places the coroutine at the tail of the RunQ, so the coroutine resumes at the end of the scheduling cycle (a sketch follows this list).
  2. In scenarios with many FDs or heavy I/O (such as receiving UDP packets in RTC), the timer gets more time slices, which effectively reduces timer error.
  3. Because an active yield only puts the coroutine back into the RunQ, its overhead is much lower than sleep(0), and the yielding coroutine is still scheduled and executed ahead of other coroutines, minimizing the performance cost.
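
A minimal sketch of the yield path described in point 1, assuming hypothetical scheduler helpers for checking timers and managing the RunQ; the function name srs_thread_yield comes from the text above, everything else is illustrative rather than the actual state-threads code.

```cpp
// Illustrative model only; the real change lives in ossrs/state-threads.
struct Coroutine;

Coroutine* current_coroutine();       // assumed: the coroutine calling yield
bool any_timer_expired();             // assumed: has a registered timer expired?
void dispatch_expired_timers();       // assumed: run the expired timer coroutine(s)
void runq_push_back(Coroutine* co);   // assumed: append to the tail of the RunQ
void switch_to_next_runnable();       // assumed: hand control back to the scheduler

void srs_thread_yield() {
    // Run any expired timer first, so heavy I/O coroutines cannot starve it.
    if (any_timer_expired()) {
        dispatch_expired_timers();
    }
    // Re-queue ourselves at the tail: we resume at the end of this scheduling
    // cycle, after the timer and the other ready coroutines have run.
    runq_push_back(current_coroutine());
    switch_to_next_runnable();
}
```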

Here are the test results, again with 4000 RTMP players. The timer intervals are almost always within 25ms, with about 50 timer events per second.

After the improvement, the I/O-intensive coroutines are optimized to actively yield and give up the time slice.
cpu=68.98%, timer=49, io=48,41,0,0,9815,217, epoll=146,13,13,30, sched=9721,18,1,0,0,0,0,0,0, clock=0,30,14,1,1,0,0,0,0, co=7808,19489,146,9595
cpu=66.31%, timer=51, io=55,50,0,0,8643,355, epoll=305,23,23,23, sched=8677,4,0,0,0,0,0,0,0, clock=0,36,11,1,1,0,0,0,0, co=7806,16961,305,8376
cpu=61.63%, timer=51, io=55,50,0,0,8643,355, epoll=305,23,23,23, sched=8677,4,0,0,0,0,0,0,0, clock=0,36,11,1,1,0,0,0,0, co=7806,16961,305,8376
cpu=56.57%, timer=50, io=57,51,0,0,9509,263, epoll=261,22,22,29, sched=9532,7,0,0,0,0,0,0,0, clock=0,37,10,1,0,0,0,0,0, co=7806,18793,261,9277
cpu=60.98%, timer=50, io=57,51,0,0,9509,263, epoll=261,22,22,29, sched=9532,7,0,0,0,0,0,0,0, clock=0,37,10,1,0,0,0,0,0, co=7806,18793,261,9277

For comparison, the data before the improvement shows only 41 timer events per second (it should normally be 50).

Before the improvement, only the Timer was placed at the head of the RunQ.
cpu=47.32%, timer=36,0,0, io=42,39,0,0,11200,0, epoll=64,1,1,24, sched=25,26,6,2,1,1,2,1,0, clock=0,6,16,4,3,1,1,1,0, co=7980,11279,64
cpu=57.00%, timer=36,0,0, io=42,39,0,0,11200,0, epoll=64,1,1,24, sched=25,26,6,2,1,1,2,1,0, clock=0,6,16,4,3,1,1,1,0, co=7980,11279,64
cpu=64.40%, timer=42,0,0, io=51,47,0,0,9104,315, epoll=219,16,16,23, sched=194,20,1,1,1,1,2,1,0, clock=1,18,12,3,1,0,2,1,0, co=7980,9152,219
cpu=66.20%, timer=42,0,0, io=51,47,0,0,9104,315, epoll=219,16,16,23, sched=194,20,1,1,1,1,2,1,0, clock=1,18,12,3,1,0,2,1,0, co=7980,9152,219
cpu=62.95%, timer=41,0,0, io=51,46,0,0,10022,257, epoll=256,16,16,19, sched=231,20,1,1,1,1,2,0,0, clock=1,16,10,5,2,1,2,0,0, co=7980,9944,256
cpu=46.91%, timer=41,0,0, io=51,46,0,0,10022,257, epoll=256,16,16,19, sched=231,20,1,1,1,1,2,0,0, clock=1,16,10,5,2,1,2,0,0, co=7980,9944,256

TRANS_BY_GPT3

winlinvip (Member, Author) commented Feb 11, 2021

The following shows the error of the 20ms timer when the system carries 3000 RTMP live streams.

Before optimization, there was a significant error, with only 46 timer events per second, and only 35 timers within 25ms.

cpu=58.96%, timer=46, io=8444,7992,0,0,174,0, epoll=237,18,18,28, sched=209,22,1,1,1,1,1,0,0, clock=0,27,10,1,1,1,2,0,0, co=12014,9622,237,0
cpu=63.42%, timer=46, io=8444,7992,0,0,174,0, epoll=237,18,18,28, sched=209,22,1,1,1,1,1,0,0, clock=0,27,10,1,1,1,2,0,0, co=12014,9622,237,0
cpu=57.48%, timer=46, io=8948,8368,0,0,0,0, epoll=275,19,19,27, sched=248,21,1,1,1,1,1,0,0, clock=0,27,10,1,1,1,1,0,0, co=12014,10040,275,0
cpu=60.19%, timer=46, io=8948,8368,0,0,0,0, epoll=275,19,19,27, sched=248,21,1,1,1,1,1,0,0, clock=0,27,10,1,1,1,1,0,0, co=12014,10040,275,0
cpu=60.96%, timer=45, io=8104,7612,0,0,0,0, epoll=221,16,16,26, sched=193,22,1,1,1,1,1,1,0, clock=0,25,10,2,1,1,2,0,0, co=12014,9236,221,0

After optimization, there are 50 timer events per second, and almost all of them are within 25ms.

cpu=69.95%, timer=52, io=9799,8347,0,0,2,0, epoll=353,47,47,42, sched=9918,8,0,0,0,0,0,0,0, clock=0,45,3,1,1,0,0,0,0, co=12014,19632,353,9573
cpu=69.64%, timer=52, io=9799,8347,0,0,2,0, epoll=353,47,47,42, sched=9918,8,0,0,0,0,0,0,0, clock=0,45,3,1,1,0,0,0,0, co=12014,19632,353,9573
cpu=61.98%, timer=51, io=9084,7551,0,0,3,0, epoll=239,41,41,51, sched=9207,13,1,0,0,0,0,0,0, clock=0,45,3,1,1,0,0,0,0, co=12014,18135,239,8981
cpu=63.31%, timer=51, io=9084,7551,0,0,3,0, epoll=239,41,41,51, sched=9207,13,1,0,0,0,0,0,0, clock=0,45,3,1,1,0,0,0,0, co=12014,18135,239,8981
cpu=60.31%, timer=52, io=8896,7403,0,0,227,0, epoll=256,42,42,57, sched=8745,15,1,0,0,0,0,0,0, clock=0,44,3,1,1,1,0,0,0, co=12014,17599,256,8505

Note: For RTMP, yield is only triggered after every 10 received packets, which effectively reduces the yield frequency.

To improve timer accuracy, high-latency coroutines yield actively. For example, if 10,000 iovecs need to be sent, they are sent in 10 batches, yielding after each batch of 1,000 iovecs. The timer is then checked during each coroutine's execution, so it never starves. Because yield is an efficient way to relinquish the time slice, the business layer can decide which coroutines have high I/O latency and should yield. A sketch of this batching pattern follows.
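
The sketch below sends a large iovec array in batches of 1,000 and yields between batches so expired timers get a chance to run. The writev() call stands in for the actual non-blocking ST/SRS send, the batch size is the one from the example above, and srs_thread_yield is declared as a placeholder.

```cpp
#include <sys/uio.h> // struct iovec, writev

// Placeholder declaration for the yield primitive discussed above.
void srs_thread_yield();

static const int kBatchIovecs = 1000; // batch size from the example above

// Send `count` iovecs in batches, yielding between batches so that a pending
// 20ms timer does not have to wait for the whole send to finish.
void send_in_batches(int fd, struct iovec* iovs, int count) {
    for (int offset = 0; offset < count; offset += kBatchIovecs) {
        int n = count - offset;
        if (n > kBatchIovecs) {
            n = kBatchIovecs;
        }
        // Error and partial-write handling omitted in this sketch; writev()
        // stands in for the real non-blocking send used under ST.
        ::writev(fd, iovs + offset, n);

        // Voluntarily give up the time slice: check and run expired timers,
        // then resume from the tail of the RunQ.
        srs_thread_yield();
    }
}
```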

TRANS_BY_GPT3

winlinvip (Member, Author) commented Feb 11, 2021

RTC publishing is slightly different: instead of 50 timer events per second there are more, because the current RTC code uses multiple timers. This will later be optimized down to a single timer.

Before optimization, observe the timer distribution: of 28 events, only 7 fall within 25ms:

cpu=94.63%, timer=28057,10509,17516, udp=57479,662,7049,0, epoll=662,42,42,0, sched=635,5,1,7,3,8,1,0,0, clock=0,5,1,1,3,14,4,0,0, co=1814,35729,662,0
cpu=97.05%, timer=28057,10509,17516, udp=57479,662,7049,0, epoll=662,42,42,0, sched=635,5,1,7,3,8,1,0,0, clock=0,5,1,1,3,14,4,0,0, co=1814,35729,662,0
cpu=94.99%, timer=26046,9755,16260, udp=60841,490,6514,0, epoll=490,23,23,0, sched=464,4,1,4,7,5,3,1,0, clock=0,2,1,1,1,11,9,1,0, co=1814,33043,490,0
cpu=94.77%, timer=26046,9755,16260, udp=60841,490,6514,0, epoll=490,23,23,0, sched=464,4,1,4,7,5,3,1,0, clock=0,2,1,1,1,11,9,1,0, co=1814,33043,490,0
cpu=92.74%, timer=29398,11012,18352, udp=59795,728,7344,0, epoll=728,47,47,1, sched=699,10,1,6,5,4,1,0,0, clock=0,6,1,2,5,9,5,0,0, co=1814,37467,728,0
cpu=92.80%, timer=29398,11012,18352, udp=59795,728,7344,0, epoll=728,47,47,1, sched=699,10,1,6,5,4,1,0,0, clock=0,6,1,2,5,9,5,0,0, co=1814,37467,728,0

After optimization, there are 50 timer events per second, and almost all of them occur within 25ms:

cpu=89.07%, timer=47873,17932,29887, udp=60041,1082,11955,0, epoll=1082,241,241,1, sched=6541,0,0,0,0,0,0,0,0, clock=0,45,4,0,0,0,0,0,0, co=1814,61066,1082, yield=5458,153
cpu=88.67%, timer=47873,17932,29887, udp=60041,1082,11955,0, epoll=1082,241,241,1, sched=6541,0,0,0,0,0,0,0,0, clock=0,45,4,0,0,0,0,0,0, co=1814,61066,1082, yield=5458,153
cpu=89.37%, timer=47889,17937,29899, udp=57825,1282,11958,0, epoll=1282,187,187,1, sched=6539,0,0,0,0,0,0,0,0, clock=0,42,7,0,0,0,0,0,0, co=1814,61209,1282, yield=5256,74
cpu=91.97%, timer=47889,17937,29899, udp=57825,1282,11958,0, epoll=1282,187,187,1, sched=6539,0,0,0,0,0,0,0,0, clock=0,42,7,0,0,0,0,0,0, co=1814,61209,1282, yield=5256,74
cpu=86.00%, timer=47824,17914,29856, udp=57919,1234,11941,0, epoll=1235,179,179,1, sched=6500,0,0,0,0,0,0,0,0, clock=0,43,6,0,0,0,0,0,0, co=1814,61077,1235, yield=5265,76
cpu=89.34%, timer=47824,17914,29856, udp=57919,1234,11941,0, epoll=1235,179,179,1, sched=6500,0,0,0,0,0,0,0,0, clock=0,43,6,0,0,0,0,0,0, co=1814,61077,1235, yield=5265,76

Note: Since yield is called from many coroutines, we modified it to check the timer first. If a timer has expired, the RunQ is guaranteed to contain multiple coroutines, so a switch is performed; if no timer has expired, the yield is skipped. A sketch of this check follows.
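
A sketch of the refinement in the note above, revising the earlier yield sketch: check the timer first and skip the switch entirely when nothing has expired. The helper names remain illustrative.

```cpp
bool any_timer_expired();     // assumed: has a registered timer expired?
void do_yield_and_requeue();  // assumed: the full yield path sketched earlier

void srs_thread_yield() {
    // If no timer has expired, switching buys nothing; keep running and
    // avoid paying the switch cost at every call site.
    if (!any_timer_expired()) {
        return;
    }
    // An expired timer guarantees the RunQ holds more than one coroutine,
    // so the switch is worthwhile.
    do_yield_and_requeue();
}
```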

TRANS_BY_GPT3

winlinvip (Member, Author) commented Feb 11, 2021

When playing RTC, yielding is mainly required when sending UDP packets.

Before optimization, there were only 37 timer events per second, and only 25 of them were within 25ms.

cpu=60.47%, timer=24338,24,24270, udp=395,168,58152,0, epoll=191,21,21,23, sched=208,14,1,1,1,1,1,0,0, clock=0,18,7,8,2,2,1,0,0, co=2417,68609,191,35,32
cpu=86.60%, timer=24338,24,24270, udp=395,168,58152,0, epoll=191,21,21,23, sched=208,14,1,1,1,1,1,0,0, clock=0,18,7,8,2,2,1,0,0, co=2417,68609,191,35,32
cpu=82.62%, timer=21875,21,21814, udp=411,114,66943,0, epoll=130,14,14,15, sched=140,15,3,3,2,1,1,0,0, clock=0,11,6,7,4,2,4,0,0, co=2417,62304,130,37,34
cpu=92.07%, timer=21875,21,21814, udp=411,114,66943,0, epoll=130,14,14,15, sched=140,15,3,3,2,1,1,0,0, clock=0,11,6,7,4,2,4,0,0, co=2417,62304,130,37,34
cpu=73.66%, timer=23421,23,23356, udp=400,121,60902,0, epoll=146,22,22,24, sched=161,11,2,2,1,1,1,0,0, clock=0,19,6,5,3,1,3,0,0, co=2417,62140,146,36,33

Note: Because the RTC playback stress test also uses an RTC publish stream, some timers are already triggered by the publisher's yields, so this baseline is slightly better than a true before-optimization state.

After optimization, there are approximately 50 timer events per second, with most of them occurring within 25ms.

cpu=78.28%, timer=29733,29,29651, udp=407,151,64834,0, epoll=237,88,88,85, sched=3361,0,0,0,0,0,0,0,0, clock=0,46,3,0,0,0,0,0,0, co=2417,75297,237, yield=3124,3114
cpu=72.98%, timer=29733,29,29651, udp=407,151,64834,0, epoll=237,88,88,85, sched=3361,0,0,0,0,0,0,0,0, clock=0,46,3,0,0,0,0,0,0, co=2417,75297,237, yield=3124,3114
cpu=77.00%, timer=29731,29,29649, udp=400,170,60098,0, epoll=252,82,82,81, sched=3150,1,0,0,0,0,0,0,0, clock=0,47,1,1,0,0,0,0,0, co=2417,76297,252, yield=2898,2886
cpu=69.64%, timer=29731,29,29649, udp=400,170,60098,0, epoll=252,82,82,81, sched=3150,1,0,0,0,0,0,0,0, clock=0,47,1,1,0,0,0,0,0, co=2417,76297,252, yield=2898,2886
cpu=79.28%, timer=29738,29,29656, udp=401,173,61549,0, epoll=249,79,79,75, sched=3216,0,0,0,0,0,0,0,0, clock=0,47,1,0,0,0,0,0,0, co=2417,78310,249, yield=2967,2956

TRANS_BY_GPT3

winlinvip (Member, Author) commented Feb 11, 2021

Most of the timers have now been optimized, but a few issues remain to be resolved:

  • RTC still uses too many timers; they should be consolidated into global timers whose count does not grow with the number of clients, since that hurts performance.
  • A single disk I/O operation still has high latency: a disk read or write may take 10ms, so no matter how promptly we yield, it still introduces significant timer error. This needs to be solved with a separate thread (see the sketch below).
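
As a rough sketch of the separate-thread direction for disk I/O (not how SRS actually implements it), a background worker consumes queued write jobs so that a ~10ms disk write never blocks the ST coroutines.

```cpp
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Illustrative only: coroutines enqueue writes; a plain OS thread does the
// actual (slow) disk I/O, so the event loop and timers are never blocked.
struct WriteJob {
    std::string path;
    std::string data;
};

class AsyncDiskWriter {
public:
    AsyncDiskWriter() : worker_(&AsyncDiskWriter::run, this) {}

    ~AsyncDiskWriter() {
        {
            std::lock_guard<std::mutex> g(mu_);
            stop_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // Called from a coroutine: only enqueue, never touch the disk here.
    void submit(WriteJob job) {
        {
            std::lock_guard<std::mutex> g(mu_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(mu_);
        while (!stop_ || !jobs_.empty()) {
            cv_.wait(lk, [this] { return stop_ || !jobs_.empty(); });
            while (!jobs_.empty()) {
                WriteJob job = std::move(jobs_.front());
                jobs_.pop();
                lk.unlock();
                // The slow disk write happens off the ST thread.
                std::ofstream(job.path, std::ios::app) << job.data;
                lk.lock();
            }
        }
    }

    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<WriteJob> jobs_;
    bool stop_ = false;
    std::thread worker_; // declared last so it starts after the other members
};
```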

TRANS_BY_GPT3

winlinvip (Member, Author) commented Aug 28, 2021

winlinvip self-assigned this Aug 28, 2021
winlinvip added this to the 3.0 milestone Sep 4, 2021
winlinvip changed the title from "ST:依靠EAGAIN调度,导致RTC场景下Timer误差较大" to "ST: Reliance on EAGAIN scheduling results in significant timer error in RTC scenarios." Jul 28, 2023
winlinvip added the TransByAI label Jul 28, 2023