Too many files causes the logging and the shell scripts to crash #58
@nishantvas OK, let me reproduce that and think about what the best solution will be. Thanks.
Great, let me know if I can provide more details. This way, I keep the history but lose the individual run details.
@nishantvas Can you attach the full log from the container, please?
Not checking results automatically
@nishantvas I think I have an idea. Working on this.
@nishantvas I've released a beta version recently. In it I made some changes to support more files. Can you check your case with multiple files using this version?
Please let me know whether this version resolves your problem or not. If it fails, please attach a log.
Sure, I'll deploy this and let you know if I can reproduce the issue. The way our tests are run, you can be sure it can reach 100K files in less than a couple of days.
@nishantvas I don't know what the limit is, but now I'm not finding the files to delete. I'm now storing the history folder in a temporary directory, and I'm deleting everything when you clean the results. If you can tell me the size of the history directory in allure-results when you have 100k results, that would help. Likewise, please check whether there is a performance problem when you clean the results.
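The size and file count fescobar asks about can be gathered with a short script. A generic sketch; the `allure-results/history` path is assumed from the discussion, not taken from the project:

```python
import os

def dir_stats(path):
    """Walk a directory tree and return (file_count, total_bytes)."""
    count, total = 0, 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
                count += 1
            except OSError:
                pass  # file removed while we were walking; skip it
    return count, total

# Hypothetical path; os.walk simply yields nothing if it doesn't exist.
count, total = dir_stats("allure-results/history")
print(f"{count} files, {total / 1024 / 1024:.1f} MiB")
```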
@fescobar, this does work for me, and setting the env var helps a bit, but creating that many files will take some iterations to run. Can I suggest some changes in the code?
@nishantvas Of course, can you create a pull request from
@nishantvas Maybe you have to use
https://github.com/fescobar/allure-docker-service#updating-seconds-to-check-allure-results
A couple of code smells and better Python practices, but it depends on whether you intend to streamline the Python code in app.py, or whether you've left it as is because it's essentially a wrapper around the actual Allure commands called via .sh files. I can create a PR "productionalizing" the code if you say so. Apart from that, one thing which will matter, in
Since it can have over 200K files in my case, this will take too much memory and too much time. And in
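On the memory point: one idiomatic way to avoid materializing a 200K-element list of filenames is to iterate lazily with `os.scandir`. A sketch of the technique, not the project's actual code:

```python
import os

def iter_result_files(results_dir):
    """Yield file paths lazily instead of building one huge list."""
    with os.scandir(results_dir) as entries:
        for entry in entries:
            if entry.is_file():
                yield entry.path

# Example: count files without ever holding all names in memory.
# (Guarded so the sketch runs even when the directory is absent.)
if os.path.isdir("allure-results"):
    total = sum(1 for _ in iter_result_files("allure-results"))
    print(f"{total} result files")
```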
For now, I use it as NONE, since I can't have the service running report generation on its own; it's a very expensive task with this many files.
There is an issue with the build. ALLURE_VERSION: 2.13.1
@nishantvas OK, let me resolve that. After that, I will let you make some changes to improve it.
@nishantvas How can I reproduce this? Also, verify you are not running as the root user. Use the default user, or
I've re-released
I have this deployed on a Kubernetes cluster with a mounted volume. Since you have changed the default user of the container, it is entirely possible that files and folders previously created by the root user under version 2.13.0 cannot be overwritten. I've tried running this in another cluster and locally, and it works. I'll fiddle with the service and see how I can purge the volumes (they're a bit messy to get into).
@nishantvas Yes, I can reproduce it. That's the problem. I will see what I can do to avoid breaking existing volumes. Steps to reproduce:
Result:
Working on the fix.
@nishantvas The problem wasn't related to permissions; it was related to concurrency. You just needed to execute
I've implemented another fix to delete all files. I think this is better. I've re-released
Can you try with this new version?
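For context on the concurrency point: one common way to keep concurrent invocations of an expensive shell step from racing over the same files is an exclusive file lock around the call. A Unix-only sketch; the lock path and wrapper are hypothetical, not the project's actual fix:

```python
import fcntl
import subprocess

LOCK_FILE = "/tmp/allure-generate.lock"  # hypothetical lock path

def run_exclusively(cmd):
    """Serialize a shell command: concurrent callers block until the
    current run finishes instead of clobbering each other's files."""
    with open(LOCK_FILE, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            return subprocess.run(cmd, check=True)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```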
Thanks so much, this does seem to be working now.
@nishantvas I will run some tests and I will deploy it in a few mins. I will let you know.
#58 - Fix - Too many files causes the logging and the shell scripts to crash
@nishantvas I've redeployed version
When there are too many files (over 50K, generated by 3,000 tests across 5 to 10 runs) in the allure-results folder, the clean-results and clean-history APIs fail.
I think it's unwise to expect even Linux to iterate through this many files, but is there any way this can be sorted out?
Is it possible that the files can be batched into multiple folders?
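A sketch of that batching idea: spread a flat results directory into hash-keyed subfolders so no single directory holds every file. The `shard-NN` layout is hypothetical, not something allure-docker-service implements:

```python
import hashlib
import os
import shutil

def batch_into_shards(src_dir, n_shards=16):
    """Move flat result files into n_shards subdirectories, keyed by a
    hash of the filename, so one directory never grows unboundedly."""
    # Snapshot the names first so the moves don't disturb iteration.
    names = [e.name for e in os.scandir(src_dir) if e.is_file()]
    for name in names:
        shard = int(hashlib.md5(name.encode()).hexdigest(), 16) % n_shards
        dest = os.path.join(src_dir, f"shard-{shard:02d}")
        os.makedirs(dest, exist_ok=True)
        shutil.move(os.path.join(src_dir, name), os.path.join(dest, name))
```

The same hash puts a given filename in the same shard every run, so lookups stay deterministic.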
Secondly, as the number of files in the folder increases, the response logging with the list of file names becomes overbearing.
The web service has to transfer too much data to return a list of 10,000-plus filenames in the response.
If this needs to be returned, there should be an env variable controlling the verbose response.
Would it make sense to reply with only the count of the files present? That, however, runs into the issue mentioned above: listing so many files almost crashes the Python process with a memory overflow.
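A sketch of what an env-var-controlled response could look like: return only a count unless a verbose flag is set. The `VERBOSE_RESULTS_RESPONSE` variable and the function itself are hypothetical, not the project's API:

```python
import os

def clean_results_response(results_dir):
    """Delete result files; response shape depends on a (hypothetical)
    VERBOSE_RESULTS_RESPONSE environment variable."""
    verbose = os.environ.get("VERBOSE_RESULTS_RESPONSE", "0") == "1"
    deleted_names = []
    deleted_count = 0
    for entry in os.scandir(results_dir):
        if entry.is_file():
            os.remove(entry.path)
            deleted_count += 1
            if verbose:  # only collect names when explicitly requested
                deleted_names.append(entry.name)
    if verbose:
        return {"deleted_files": deleted_names}
    return {"deleted_count": deleted_count}
```

In the default (non-verbose) mode the response stays a single integer no matter how many files were removed; verbose mode still pays the memory cost, which is why it would be opt-in.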