The FLARE extent is temporarily read-only. #845

Closed
Suchiman opened this issue Apr 24, 2019 · 7 comments

@Suchiman

I have quite a large number of these exceptions in my ingestion log (92 between 2019-04-20 03:23:26 and now, 2019-04-24 10:48:00). Is this to be expected?

2019-04-24 07:00:06 Exception raised when writing to event storage
Flare.FlareCapacityException: The FLARE extent 2019-04-24T00:00:00.0000000Z is temporarily read-only.
   at Flare.Storage.Native.FlareStorageExtent.AcquireWriteLock()
   at Flare.Storage.Native.FlareStorageExtent.Add(StructuredEvent[] events, Boolean lazyFlush)
   at Flare.Events.EventStore.Add(StorageEventCreationData[] eventsCreationData, Boolean lazyFlush)
   at Flare.Queries.DataStore.Add(StorageEventCreationData[] eventsCreationData, Boolean lazyFlush)
   at Seq.Server.Web.Api.RawEventsController.Ingest()
@nblumhardt
Member

Hmm, that one is our circuit breaker, which fires when disk I/O is saturated (generally by long-running from-disk queries, but writes can trigger it in unusual circumstances). It does sound as though you might be below the RAM threshold needed for Seq to work efficiently; the post-5.1-upgrade diagnostics should tell us something about that.
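
(Editorial aside: the snippet below is a minimal, generic sketch of the circuit-breaker pattern described above, not Seq's actual implementation. The class name IoCircuitBreaker, the thresholds, and the exception type are assumptions made purely for illustration.)

```csharp
using System;
using System.Threading;

// Hypothetical sketch only: a write-path circuit breaker in the general style
// described above. None of these names or thresholds come from Seq itself.
class IoCircuitBreaker
{
    readonly object _extentLock = new object();
    readonly TimeSpan _maxLockWait;   // how long a writer waits before giving up
    readonly TimeSpan _cooldown;      // how long writes are refused once tripped
    DateTime _reopenAtUtc = DateTime.MinValue;

    public IoCircuitBreaker(TimeSpan maxLockWait, TimeSpan cooldown)
    {
        _maxLockWait = maxLockWait;
        _cooldown = cooldown;
    }

    public void Write(Action writeToExtent)
    {
        // While the breaker is open, fail fast: the extent is "temporarily read-only".
        if (DateTime.UtcNow < _reopenAtUtc)
            throw new InvalidOperationException("The extent is temporarily read-only.");

        // If disk I/O is saturated (for example, a long-running from-disk query is
        // holding things up), the lock wait exceeds the threshold and the breaker trips.
        if (!Monitor.TryEnter(_extentLock, _maxLockWait))
        {
            _reopenAtUtc = DateTime.UtcNow + _cooldown;
            throw new InvalidOperationException("The extent is temporarily read-only.");
        }

        try { writeToExtent(); }
        finally { Monitor.Exit(_extentLock); }
    }
}
```

The point of the pattern is that once the storage layer cannot keep up, further writers are rejected quickly for a cooldown period rather than queuing behind a saturated disk, which is why the error reads as a temporary capacity problem rather than a permanent failure.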

@Suchiman
Author

Suchiman commented Apr 24, 2019

Could this be caused by retention policies? I have 3 retention policies in total, but one is set to clean up after just 1 h; the goal was to get rid of junk that is at best interesting in the short term, so that more useful events stay in memory.
Regarding "post-5.1-upgrade", I'm not sure I understand that precisely: do you mean diagnostics from the currently running 5.1.3000 Seq instance, or upcoming diagnostics?

@nblumhardt
Member

A 1 hr retention policy can sometimes have a substantial cost, so I wouldn't rule it out 👍

Re diagnostics, I meant those of the currently running Seq instance; sorry about the vague wording.

@nblumhardt
Member

Hi Robin; with some time now passed, and #837 completed, are you still experiencing any issues with this server?

Best regards,
Nick

@Suchiman
Author

Hi Nick,

This message nonetheless still appears in the diagnostic logs, and "on disk" search is still unreasonably slow; the 32 GB of memory just avoids hitting the disk as often, which makes this feel less like an issue.

@nblumhardt
Member

Thanks for the follow-up @Suchiman ... the slow disk access is still not ideal 🙁 ... I'll run back through our earlier email thread and let you know if I have any new ideas.

@nblumhardt
Member

Closing the loop after our recent email thread: it appears that the cache.systemRamTarget value was too high for this server, and reducing it to 0.85 fixed the issue. Thanks for sticking with this one, Robin - if any problems reappear, please feel free to jump back in here :-)
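
(Editorial aside: a rough back-of-the-envelope check of what that change means, assuming, as the setting's name suggests, that cache.systemRamTarget is the fraction of system RAM Seq will target for its cache, and using the 32 GB figure mentioned earlier in the thread. The previous value isn't stated here, so 0.9 below is only an illustrative higher setting.)

```csharp
using System;

// Illustrative arithmetic only; the exact semantics of cache.systemRamTarget and this
// server's previous value are assumptions, not taken from the thread.
const double totalRamGb = 32.0;

Console.WriteLine($"At 0.90: ~{0.90 * totalRamGb:F1} GB targeted by the cache"); // ~28.8 GB
Console.WriteLine($"At 0.85: ~{0.85 * totalRamGb:F1} GB targeted by the cache"); // ~27.2 GB
```

Under those assumptions the lower target leaves roughly 1.5 GB more headroom for the operating system and its file cache, which appears to have been enough on this server to keep the disk-saturation circuit breaker from tripping.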
