
Elastic Search Indexing failing #35

Closed

seantomlins opened this issue Apr 15, 2020 · 8 comments

@seantomlins

Hey. We deployed this as an App Service on a shared App Service plan (S2, I think).

It started to fail after a couple of weeks with this issue:
https://community.sonarsource.com/t/analysis-failed-with-unrecoverable-indexation-failures/12329/2

The log suggests that we've run out of space, but the service plan has 50GB of disk with 47GB free:

2020.04.15 11:09:51 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[Windows (D:)]], net usable_space [1.2gb], net total_space [31.6gb], types [NTFS]

Any idea why I'm running into this?

@vanderby
Owner

I have not run into this before, but I don't run on a shared plan (D1). I've had issues with the Free and Shared tiers, which is why I try to push people to at least the B1 plan (B2 and S2 are a nice bump for the second core).

@seantomlins
Author

We ended up redeploying to a B1 plan.

Came across this issue on the Elasticsearch repo: elastic/elasticsearch#53233

I'm convinced it's the shared plan causing this issue.

If anyone comes across this issue I hope this helps :)

Thanks @vanderby for the quick response

@richneptune

I'm on a B2 and have started to run into this in the past week (our SQ instance has been up for nearly a year).

There's definitely something funny with how webapps have their disk space calculated.

If I go into the CLI and run "dir" in wwwroot, it reports "9,665,695,744 bytes free" - so most of my 10GB is unused.

If I run "df", I get the following bizarre drive information:

D:\home\site\wwwroot>df
Filesystem           1K-blocks     Used Available Use% Mounted on
C:                    47184828 25922612  21262216  55% /c
D:/Program Files/Git  33193980 31572744   1621236  96% /

So it's telling me there's roughly 1.5GB free.
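
To cross-check what the sandbox itself reports (which should be closer to what ES queries through Java NIO than dir/df are), here is a quick sketch for the Kudu PowerShell console - the drives and numbers will of course depend on your instance:

    # Sketch: list free/total space for each mounted drive as seen from inside the App Service sandbox.
    [System.IO.DriveInfo]::GetDrives() |
        Where-Object { $_.IsReady } |
        Select-Object Name,
            @{ Name = 'FreeGB';  Expression = { [math]::Round($_.AvailableFreeSpace / 1GB, 2) } },
            @{ Name = 'TotalGB'; Expression = { [math]::Round($_.TotalSize / 1GB, 2) } }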

I've tried moving temp to d:\local\temp, and data and logs to other directories outside of wwwroot, but I'm still getting an error that ES has exceeded the flood-stage watermark and gone read-only.

My es6 directory is tiny (70MB perhaps), so if I could disable the check in elasticsearch.yml that would be awesome - but SonarQube writes a new one at startup and drops my changes to the flood-stage parameter.

If you guys have any suggestions that would be awesome.

@saibaskaran57

Hello there,

I'm facing this issue as well on Azure App Service S2 instance.

From the logs, it looks like we have enough space for the operation:

2020.04.16 10:49:26 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[Windows (D:)]], net usable_space [1.2gb], net total_space [31.6gb], types [NTFS]

SonarQube does not seem to honor the elasticsearch.yml file, which would let us disable the threshold.

Any ideas on this?
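
One way to confirm whether yml changes are being picked up at all is to look at the elasticsearch.yml that SonarQube itself writes out at startup. A rough sketch from the Kudu PowerShell console (the search root is an assumption - adjust it to wherever SonarQube is unpacked on your instance):

    # Find every elasticsearch.yml under the site and print any disk-related settings it contains.
    Get-ChildItem D:\home\site\wwwroot -Recurse -Filter elasticsearch.yml |
        ForEach-Object { Select-String -Path $_.FullName -Pattern "disk" -SimpleMatch }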

@saibaskaran57

Hello @richneptune @seantomlins @vanderby ,

I've managed to solve my issue a different way: I scaled the Azure App Service up to P1V2 (which has more disk space) so ES could re-index, then scaled back down to S1.

For some reason, the disk size calculation seems incorrect. None of the techniques for changing elasticsearch.yml worked for me.

Hope this can help on your side.

Thanks.
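
For anyone scripting this workaround, a minimal sketch with the Az PowerShell module - the resource group and plan names are placeholders, and the tiers assume the P1V2 up / S1 down sequence described above:

    # Placeholder names - substitute your own resource group and App Service plan.
    $rg   = "my-resource-group"
    $plan = "my-sonarqube-plan"

    # Scale up to P1V2 so Elasticsearch has room to re-index.
    Set-AzAppServicePlan -ResourceGroupName $rg -Name $plan -Tier "PremiumV2" -WorkerSize "Small"

    # ...wait for SonarQube/ES to finish re-indexing, then scale back down to S1.
    Set-AzAppServicePlan -ResourceGroupName $rg -Name $plan -Tier "Standard" -WorkerSize "Small"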

@RobinDink

Hey @saibaskaran57,

thank you for your suggestion, this fixed our problem!
You rock!

Cheers,
Robin

@vanderby
Owner

Thanks for the collaborative effort in solving this. And thanks for actually finding the project useful!

@vanderby
Owner

vanderby commented May 6, 2020

Cross posting this from #40

I figured out a way to disable the disk check. Changing this setting did persist across App Service restarts.

  1. Enable ES HTTP Port by setting sonar.search.httpPort in the app settings.

    Notes:

    1. I set the port to 9200 since that is the ES default.
    2. SQ will log a warning not to enable this for production. But since the port is firewalled from the internet, we should be safe here.
    3. The App Service should restart on its own after updating the app setting, but if not manually restart it.
  2. Open the Kudu PowerShell debug console. Run the two commands below.
    $ProgressPreference = "SilentlyContinue" # This disables the progress meter from accessing the console which is not allowed when executing on an App Service
    Invoke-WebRequest -Uri http://localhost:9200/_cluster/settings -ContentType 'application/json' -Method PUT -Body '{ "persistent": {"cluster.routing.allocation.disk.threshold_enabled":false }}' -UseBasicParsing
    
  3. Verify by one of the following:
    1. ES.log should say
      INFO  es[][o.e.c.s.ClusterSettings] updating [cluster.routing.allocation.disk.threshold_enabled] from [true] to [false]
      
    2. Query the ES HTTP API:
      Invoke-WebRequest -Uri http://localhost:9200/_cluster/settings -Method GET -UseBasicParsing
      StatusCode        : 200
      StatusDescription : OK
      Content           : {"persistent":{"cluster":{"routing":{"allocation":{"disk":{ "threshold_enabled":"false"}}}}},"transient":{}}
      ...
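
If any indices already tripped the flood-stage watermark before the check was disabled, ES 6.x does not lift the read-only block on its own, so it may also need to be cleared by hand. A sketch along the same lines as step 2, assuming the same port 9200:

    # Remove the read_only_allow_delete block that ES sets on indices when the flood stage is exceeded.
    Invoke-WebRequest -Uri http://localhost:9200/_all/_settings -ContentType 'application/json' -Method PUT -Body '{ "index.blocks.read_only_allow_delete": null }' -UseBasicParsing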
      

Lastly, ES does log this warning about http.enabled being deprecated. Once SQ updates to a newer version of ES, these steps may need to be reworked. From ES.log:

WARN  es[][o.e.d.c.s.Settings] [http.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.

Reference For The Future:

@vanderby vanderby pinned this issue May 6, 2020