[Bug]: Synology error seccomp unavailable when starting Elasticsearch #358
Comments
What Synology device is that? I think the crucial part of the error is:
You can find that error in a few places on the internet, for example here. Usually the suggestion is to deactivate that security feature, at your own risk of course, by setting the environment variable on the ES container:
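The variable quoted in the comment above was lost in the export. Historically, the workaround most often suggested for seccomp errors on older kernels was the `bootstrap.system_call_filter` setting, passed as an environment variable on the ES container; note that this setting only exists in Elasticsearch 7.x and earlier and was removed in 8.x. A hedged sketch of what that would look like in a compose file (service and image names are illustrative, not quoted from this thread):

```yaml
# Illustrative docker-compose fragment, not the thread's actual file.
archivist-es:
  image: bbilly1/tubearchivist-es
  environment:
    # Disables the seccomp system-call filter, at your own risk.
    # Only valid on Elasticsearch 7.x and earlier; removed in 8.x.
    - "bootstrap.system_call_filter=false"
    - "discovery.type=single-node"
```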
I have never seen that error, which suggests that this is a Synology-specific issue. Maybe an older model running older software?
The error:
Currently reviewing logs to see what is different between the two environments.
The fatal error is coming from another area:
Looking into my runtime, I do see the call for Lucene, but it succeeds by falling back to a previous version. Looking at the logs for both, however, I see that both are making the successful call:
Stable ES
The only real difference I can see is the currently running version of ES.
Questions:
It's the FS1018 running DSM 7.1.1-42962 Update 1.
I tried adding that as an environment variable in the ES container and nothing changed.
I did have a working configuration, but since I have Watchtower and it automatically updates the other containers, I couldn't tell you when ES was last working, since the containers have to be recreated for the images to be updated. Before I realized there was a problem with the ES container, there was another issue with the redis container, shown here: #354. So I removed the RDB file in the mounted folder and it began working again.
I recreated the container with the suggested
The nodes file in the ES mounted folder says
OK, so this originally broke in a static configuration of
Let me take a closer look at the container to see if there might be something that we can fix/replace to resolve the issue.
Alright, it looks like a corrupted index is causing the problem. Looking into another, similar issue, I was able to take a closer look at your error output. This segment of the error output gives us the required details about which index is causing the problems:
This gives us a location of
Let us know if that works out for you and whether it changes the output of the logs.
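The error segment referenced above was stripped from this export, but Lucene corruption errors typically name the on-disk resource that failed its check. As a general illustration (the example log line below is synthetic, modeled on a typical `CorruptIndexException` message, not taken from this thread), a small script can pull those paths out of an exported log:

```python
import re

# Hypothetical example line, modeled on a typical Lucene
# CorruptIndexException message; not from this thread's actual log.
sample = (
    "Caused by: org.apache.lucene.index.CorruptIndexException: "
    "codec footer mismatch (resource=/usr/share/elasticsearch/data/nodes/0/"
    "indices/AbC123xyz/0/index/_4f.cfs)"
)

def find_corrupt_index_paths(lines):
    """Return the data paths named in CorruptIndexException messages."""
    pattern = re.compile(r"CorruptIndexException: .*?resource=([^)\s]+)")
    hits = []
    for line in lines:
        match = pattern.search(line)
        if match:
            hits.append(match.group(1))
    return hits

print(find_corrupt_index_paths([sample]))
```

The path segment after `indices/` is the internal index UUID, which can then be matched against the index names ES reports.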
ES doesn't like downgrading, but that image seems to indicate progress. Looking forward to the results of the upgrade to 8.5.X to see if this issue is now resolved overall.
ES 8.5.0 introduced breaking changes, and a lot of other changes too; I haven't checked the release notes yet. Best to stick with the version in the docker-compose file. If you start changing things in the ES filesystem, you run the risk of making things worse. Maybe it's best to restore from backup? But yes, downgrading is not supported by ES...
Expose should be used when services are going to be on the same network. Since these are all part of the network (because they are being started by the same
The problem you are having is that you are attempting to connect to ES via the host IP. Change the ES_URL from
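The replacement ES_URL value was stripped from the comment above. Assuming the ES service is named `archivist-es` (as in the project's example docker-compose file; not confirmed by this thread), the fix being described is to point ES_URL at the compose service name on the shared network rather than at the NAS host IP:

```yaml
# Illustrative fragment; service names are an assumption, not quoted here.
tubearchivist:
  environment:
    # Use the compose service name, not the Synology host IP:
    - ES_URL=http://archivist-es:9200
```

Within a single compose project, Docker's embedded DNS resolves each service name to its container on the shared network, so no host IP is needed.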
@N72826, have you been able to confirm that this configuration update allows the system to work as expected? If there are any additional problems, let us know and we can review and try to help troubleshoot the cause.
Yeah, that solved the connection issue, although for some reason it worked the way I had it configured previously. Like I said before, I think the primary issue I had was redis being unable to read the RDB file, as in #354. So I screwed myself over by setting the ES image to latest, and after updating ES didn't fix it, I then found that other issue describing the fix for redis. But the damage was already done, and deleting the RDB file only fixed the redis container, so I was left with an incompatible version of ES. Thank you guys for helping me figure that out. I appreciate those who contribute to this project because it has made my life easier.

The only thing I lost was my Tube Archivist settings; I had it set to auto-rescan subscriptions every 30 minutes and start downloads every hour. Scheduling the downloads was easy, but I remember having trouble with scheduling the rescan. I know the cron scheduling is non-standard and by design prevents you from using the wildcard (*) in the minutes value, but somehow I was able to schedule the rescan to occur every 30 minutes.
Every thirty minutes should be
Yeah, the wildcard doesn't work, and I tried comma-separated values, but that also didn't work. So I'm not sure what I had it set to before, but I accomplished it somehow and never got blocked by YouTube. This isn't even really an issue, more of a preference. The original issue I created this thread for has been solved; I just thought I might ask while I'm here.
Yeah, that part of the wiki is outdated. After we had a few people come here surprised that they got blocked by YouTube, I thought it best to limit the frequency to at most once per hour. I'll fix the wording... But glad you figured things out here.
Closing this for now; better wording will be in the next release.
Latest and Greatest
Operating System
Synology
Your Bug Report
Describe the bug
Hello, I want to thank you for creating this project. I already solved the redis issue. archivist-es is continuously restarting; I will provide a log from Synology Docker here in a second. I primarily use Portainer to manage the containers, but since Elasticsearch keeps bootlooping, the container log page won't even load in Portainer.
Steps To Reproduce
Attempting to start the containers.
Expected behavior
archivist-es should run and stay active continuously instead of restarting itself every few seconds and consuming memory.
archivist-es.csv
Relevant log output
Anything else?
I attached my log output above since that's the only place where I could, and because Synology Docker only allowed me to export it as a formatted CSV file. Apologies, and hopefully the solution is easily apparent.