Other bugs

What were you trying to do?
Every modlog case that gets created is appended to the file (or storage backend) that holds all of the modlog information. When running the [p]casesfor command with a large number of cases (ours is currently around 68k, almost 69k), it takes noticeably longer than it would for someone without that many cases. As the number of cases keeps growing, we expect the command to take longer and longer.
What were you expecting to happen?
Ideally, some adjustment to how modlog cases are stored, such as clustering or indexing them per user ID, so that lookups stop scaling with the total number of cases and processing time improves.
What actually happened?
Since we have around 68k cases and are approaching 69k, every time we use the casesfor command it takes a while to respond, since it apparently has to loop through the entire file to find the cases for the specific ID being searched.
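For illustration, here is a minimal Python sketch of the difference between the behaviour described above and the clustering idea. This is not Red's actual storage code; the shape of the case dicts and the "user" key are assumptions.

```python
# Illustrative only -- not Red's modlog implementation.
from collections import defaultdict

def casesfor_linear(all_cases: list[dict], user_id: int) -> list[dict]:
    """Reported behaviour: scan every stored case for a matching user."""
    return [case for case in all_cases if case.get("user") == user_id]

def build_user_index(all_cases: list[dict]) -> dict[int, list[dict]]:
    """Proposed idea: cluster cases by user ID once, up front."""
    index: dict[int, list[dict]] = defaultdict(list)
    for case in all_cases:
        index[case["user"]].append(case)
    return index

def casesfor_indexed(index: dict[int, list[dict]], user_id: int) -> list[dict]:
    """With an index keyed by user ID, lookup cost depends only on how many
    cases that member has, not on the total number of cases."""
    return index.get(user_id, [])

# Tiny demo showing both approaches return the same cases.
cases = [{"case": 1, "user": 111}, {"case": 2, "user": 222}, {"case": 3, "user": 111}]
index = build_user_index(cases)
assert casesfor_linear(cases, 111) == casesfor_indexed(index, 111)
```

With a full scan, every invocation touches all ~68k records; with an index keyed by user ID, a lookup only touches that member's own cases after a one-time indexing pass.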
How can we reproduce this issue?
Accumulate a large number of cases on your bot, then run the casesfor command and observe how long it takes to respond.
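To put a rough number on the slowdown without needing tens of thousands of real cases, a synthetic benchmark along these lines shows how a full scan grows with case count (purely illustrative; it does not touch Red or its storage backend):

```python
# Time a full-scan lookup over synthetic case data at increasing sizes.
import random
import time

def fake_cases(n: int) -> list[dict]:
    return [{"case": i, "user": random.randrange(5000)} for i in range(n)]

for n in (1_000, 10_000, 69_000):
    cases = fake_cases(n)
    start = time.perf_counter()
    hits = [c for c in cases if c["user"] == 42]
    elapsed = time.perf_counter() - start
    print(f"{n:>6} cases: scan took {elapsed * 1000:.2f} ms, {len(hits)} hits")
```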
Trying to identify the actual cause with certainty, as there are a couple of potential candidates for the perceived slowness, and the size of the data alone seems unlikely to cause a noticeable slowdown here.
Which storage backend is in use?
What is the actual speed of a response here?
Does this occur only with multiple commands used in succession, or does this also happen if it is the only command to have been run within a minute?
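To help answer the question about actual response speed, a minimal timing wrapper like the following could be dropped around whatever coroutine performs the real lookup (fetch_cases_for here is a placeholder, not a Red API call):

```python
# Measure how long the case lookup itself takes, independent of Discord latency.
import asyncio
import time

async def fetch_cases_for(member_id: int) -> list:
    await asyncio.sleep(0.25)  # placeholder for the real storage lookup
    return []

async def timed(coro, label: str):
    start = time.perf_counter()
    result = await coro
    print(f"{label}: {time.perf_counter() - start:.3f}s")
    return result

asyncio.run(timed(fetch_cases_for(1234), "casesfor lookup"))
```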