
Why do error logs accumulate RAM? #6339

Open
brixxnatt opened this issue May 19, 2024 · 5 comments

@brixxnatt

Why do error logs accumulate RAM?

@brixxnatt (Author)

Each error adds 100 MB to RAM, and it keeps accumulating.

@burak-58 (Contributor)

Hi @brixxnatt,
Thank you for reporting. Can you give me some more info? Which logs cause the RAM usage? Please provide a step-by-step reproduction scenario.

@brixxnatt (Author)

If, for example, two people send an RTMP stream to the same endpoint at the same time, one connects and the other keeps retrying the connection. I know two people should not publish to the same RTMP endpoint at the same time, but the server should just return an error instead of accumulating those errors in RAM until it crashes out of memory.
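The collision described above can be simulated by pointing two concurrent ffmpeg publishers at the same stream id. A minimal sketch, assuming ffmpeg is installed; the server address, the "LiveApp" application name, and the helper names are illustrative placeholders, not part of Ant Media Server's API:

```python
import subprocess

def build_publish_command(server, stream_id, input_file="test.mp4"):
    # Build an ffmpeg RTMP publish command for a given stream id.
    # "LiveApp" is the default Ant Media application name; adjust as needed.
    return [
        "ffmpeg", "-re", "-i", input_file,
        "-codec", "copy", "-f", "flv",
        f"rtmp://{server}/LiveApp/{stream_id}",
    ]

def publish_concurrently(server, stream_id):
    # Start two publishers against the SAME stream id. The second one is
    # expected to be rejected by the server and keep failing, which is the
    # scenario reported to leak memory.
    cmd = build_publish_command(server, stream_id)
    first = subprocess.Popen(cmd)
    second = subprocess.Popen(cmd)
    return first, second
```

Running `publish_concurrently("myIp", "teststream")` should make the second ffmpeg process fail repeatedly while the first streams normally.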

@lastpeony (Contributor)

@brixxnatt
Hello, do you observe an increase in JVM heap memory on the web panel dashboard that leads to the crash, or only in system memory?
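To tell the two apart from the command line, the JVM heap can be inspected with `jcmd <pid> GC.heap_info` (available on recent JDKs) and compared against the process's resident set size reported by the OS. A small sketch; the pid lookup is left to the reader:

```python
import subprocess

def heap_info_command(pid):
    # jcmd ships with the JDK; GC.heap_info prints current heap usage.
    return ["jcmd", str(pid), "GC.heap_info"]

def rss_kib(pid):
    # Resident set size (system memory) for the process, in KiB, via ps.
    out = subprocess.check_output(["ps", "-o", "rss=", "-p", str(pid)])
    return int(out.strip())
```

If RSS grows while the reported heap stays flat, the growth is likely off-heap (native buffers, logging I/O, etc.) rather than a Java object leak.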

@lastpeony (Contributor) commented May 31, 2024

> If, for example, two people send an RTMP stream to the same endpoint at the same time, one connects and the other keeps retrying the connection. I know two people should not publish to the same RTMP endpoint at the same time, but the server should just return an error instead of accumulating those errors in RAM until it crashes out of memory.

Hello,
I am trying to reproduce this problem on a clean cloud instance, but I could not manage to. I am using ffmpeg to publish an mp4 file repeatedly to the same streamId over RTMP, and I get an I/O error from ffmpeg. I do not observe a 100 MB RAM increase.
This is how I publish:

import subprocess
import time

def run_ffmpeg_stream(command):
    process = subprocess.Popen(command, shell=True)
    return process

def stop_ffmpeg_stream(process):
    # Terminate the FFmpeg process
    process.terminate()
    process.wait()

def main():
    ffmpeg_command = "ffmpeg -re -i mysync.mp4 -codec copy -f flv rtmp://myIp/LiveApp/teststream"

    for i in range(1000):
        print(f"Starting stream {i + 1}/1000")
        
        process = run_ffmpeg_stream(ffmpeg_command)
        print("Streaming started")

        time.sleep(5)

        stop_ffmpeg_stream(process)
        print("Streaming stopped")

        time.sleep(1)

    print("Completed 1000 streaming cycles")

if __name__ == "__main__":
    main()

Can you please share your exact instance specs and how exactly you publish? If you have time, we can also schedule a meeting and test it together.

I need to reproduce this on a clean AMS installation.
