Docker stress test with Synology DS220+ (v112.1) #1808
Labels
for: ported-from-airsonic
Known issues carried over after Airsonic was closed.
in: data-scan
Issues specifically related to scanning.
in: docker
Issues related to Docker.
in: test
Issues in the test module or test package
type: investigation
Investigation required. If it's not a bug it will be closed.
Milestone
Docker considerations. Related airsonic/airsonic#1473, #1747
If we cannot reproduce the problems reported with Airsonic, finding their cause amounts to proving a negative. Therefore, we will conduct an equivalent stress test, and if no problems appear, this issue will be closed.
Overview
This may not be a problem with an immediate answer, but it can be addressed through continuous observation and improvement. Since the feature of measuring memory via logging has been added in Jpsonic, it is now easier than before to measure each process and track changes over time. These investigations and cumulative improvements are therefore somewhat easier than they used to be.
Goal
A configuration confirmed to work on a Synology DS220+ is not difficult to run in a general Linux environment; only the usual adjustments, such as rewriting directory paths, are required.
Non-Goal
Dummy data
10 songs with random titles per album.
10 such albums per artist.
100 such artists in your music folder. i.e. 10,000 songs per music folder.
Create many such music folders. When testing with 100,000 songs, this means 10 folders.
Procedure
Perform the following steps 5 times without restarting Jpsonic. Simply put, repeat registering everything and deleting everything 5 times; in total, the scan will run 15 times.
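The repeated scans can be driven from outside via the Subsonic REST API that Jpsonic implements. A minimal sketch, assuming a local server, placeholder credentials, and plain-password authentication (chosen here for brevity, not as a recommendation):

```python
import json
import time
from urllib.parse import urlencode
from urllib.request import urlopen

def scan_url(base: str, endpoint: str, user: str, password: str) -> str:
    """Build a Subsonic REST API URL for the given endpoint."""
    params = urlencode({"u": user, "p": password, "v": "1.15.0",
                        "c": "stress-test", "f": "json"})
    return f"{base}/rest/{endpoint}?{params}"

def run_scan(base: str, user: str, password: str, poll: float = 5.0) -> None:
    """Kick off a scan and poll until the server reports it has finished."""
    urlopen(scan_url(base, "startScan", user, password)).read()
    while True:
        body = json.load(urlopen(scan_url(base, "getScanStatus", user, password)))
        if not body["subsonic-response"]["scanStatus"]["scanning"]:
            break
        time.sleep(poll)
```

One test cycle would then be: add the dummy folders, `run_scan(...)`, delete them, `run_scan(...)` again, repeated 5 times.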
Result
With song part size of 0
In a test using dummy data with a song part size of 0, no memory overflow was observed with the following settings.
With data tagged as well-formed, as in this verification, a scan of 100,000 songs completes in about 5 minutes, and the second scan in about 1 minute. However, efficiency decreases somewhat as the number of songs increases.
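For reference, the quoted times work out to roughly the following throughputs. This is simple arithmetic on the figures above, not a separate measurement:

```python
songs = 100_000
first_scan_s = 5 * 60    # "scanning of 100,000 songs is completed in 5 minutes"
second_scan_s = 60       # "1 minute for the second scan"

print(round(songs / first_scan_s))   # 333 songs/second on the first scan
print(round(songs / second_scan_s))  # 1667 songs/second on the rescan
```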
The above parameters are not meant to be optimal values; they are simply values that do not cause memory overflow. It will therefore take some time before we can officially announce the "recommended values" that users want. A fix expected to bring improvement is on the way, and verification will continue to be performed cyclically.
However, the recommended values for a library of 100,000 songs, the main target, will probably not change significantly in the future, because it is such a nice round number!
If the song part size is not 0
Create 100,000 dummy files in the same way by writing tags to a 34.7 MB FLAC song.
Ah, creating the data takes about 5 hours, and the 4 TB hard disk purchased for testing screamed, because the OS area of the NAS also lives on it.
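The disk pressure is easy to confirm with back-of-the-envelope arithmetic; the 34.7 MB figure is from the text, and decimal (10^6-byte) megabytes are assumed:

```python
songs = 100_000
song_mb = 34.7                           # size of the template FLAC
total_tb = songs * song_mb / 1_000_000   # decimal MB -> decimal TB
disk_tb = 4.0                            # capacity of the test hard disk

print(f"dummy data: {total_tb:.2f} TB")          # 3.47 TB
print(f"disk usage: {total_tb / disk_tb:.1%}")   # roughly 87% before counting the OS area
```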
Taking this situation into consideration, the "Procedure" was repeated 5 times each, with dummy data containing song parts and with data without song parts.
A good harvest of findings.
Summary
The original purpose was achieved.
Issue extraction
Ad-hoc verification procedures were established for this. There are three main suggestions for improvement.
These are listed in order of priority. (Resolving (i) first would allow us to delete the large dummy data; data with a music part size of 0 could then be added to the freed space....)
We should not parallelize scans until these are resolved; it would just drive up resource consumption pointlessly. If anything, it is better not to parallelize on the DS220+: to avoid battling the GC, the required memory would jump.
However, the verifications planned for v112.1.0 have been completed, so this issue is closed. Speed improvements will be a topic for v112.2.0 and later.