Large append-only queue locks up machine #330
This is supposed to be how it works currently. On Linux at least, I don't know of a way to cause this behaviour even if we wanted to. Admittedly, this is not a test we run regularly.
Note: when you are actively writing to the queue, you should see a slowdown once 10% of main memory is dirty; the virtual size shouldn't matter.
Can you give me an idea of the typical size of your messages and how you have configured the queue?
On 23 January 2017 at 13:23, mcs6502 wrote:
I have a test process that *sequentially* appends zero-filled blocks to a Chronicle Queue. Running several such processes simultaneously (each with its own queue file) freezes the entire machine when the total size of all queues gets close to the size of physical RAM. This was observed on 64-bit Linux with Chronicle Queue 4.5.1. Diagnostic output suggests that the machine begins paging out intensively when this happens. The RSS of each test process equals its SHM size, which equals the size of the queue.
Could you say whether you have seen anything similar, and suggest a workaround so that a process can create a queue larger than physical RAM?
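The scenario above can be sketched with a plain memory-mapped file (a minimal, hypothetical reproduction in Python, not Chronicle Queue itself; block and file sizes are kept tiny here, whereas the report used queues approaching the size of RAM). Every store into the mapping dirties a page that the kernel must eventually write back, which is what drives the paging behaviour described:

```python
import mmap
import os
import tempfile

BLOCK = 512        # message size from the report
BLOCKS = 2048      # small for illustration; the report grew queues toward RAM size
SIZE = BLOCK * BLOCKS

# Create a file and pre-size it, as a memory-mapped queue file would be.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as m:
    zero = bytes(BLOCK)
    for i in range(BLOCKS):
        # Each store dirties a page; the kernel writes dirty pages back
        # asynchronously and throttles writers once enough memory is dirty.
        m[i * BLOCK:(i + 1) * BLOCK] = zero
    m.flush()  # msync: force writeback rather than waiting for the flusher

os.close(fd)
size_on_disk = os.stat(path).st_size
os.unlink(path)
print("wrote", size_on_disk, "bytes via mmap")
```

Running many such loops concurrently, with SIZE near physical RAM, approximates the reported workload.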
The queue is configured as a SingleChronicleQueue with WireType.FIELDLESS_BINARY and SystemTimeProvider.INSTANCE, and the message size is 512 bytes. I think the duration of the lock-ups is directly proportional to the size of the queue at the time a lock-up is triggered.
To clarify:
- You should be able to write as much as your free disk space allows, regardless of memory.
- Your writer will slow down once 10% of main memory is dirty.
- You don't have to do anything unless you want to increase this default kernel setting.
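The "10%" here most likely refers to the Linux dirty-page writeback tunables, `vm.dirty_background_ratio` (commonly 10) and `vm.dirty_ratio` (commonly 20); treat those defaults as assumptions, since they vary by kernel and distribution. A back-of-envelope sketch of what they mean in bytes:

```python
def dirty_thresholds(ram_bytes, background_ratio=10, throttle_ratio=20):
    """Return (background-writeback threshold, writer-throttle threshold)
    in bytes, mirroring vm.dirty_background_ratio / vm.dirty_ratio.
    The default percentages are assumptions; check your kernel with
    `sysctl vm.dirty_background_ratio vm.dirty_ratio`."""
    return (ram_bytes * background_ratio // 100,
            ram_bytes * throttle_ratio // 100)

GiB = 1 << 30
bg, throttle = dirty_thresholds(16 * GiB)
print(f"background writeback above {bg / GiB:.1f} GiB dirty, "
      f"writers throttled above {throttle / GiB:.1f} GiB")
# → background writeback above 1.6 GiB dirty, writers throttled above 3.2 GiB
```

Below the first threshold the kernel flushes dirty pages in the background; above the second, writing processes are throttled, which matches the "your writer will slow down" behaviour described above.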
On 23 January 2017 at 15:42, mcs6502 wrote:
Please could you clarify whether by "works currently" you meant that the lock-ups are supposed to occur, or that the queue is expected to be able to exceed the RAM size? Also, regarding your comment about the slowdown: should I expect the system to slow down once it reaches 10% dirty pages, or should I slow the writers down myself so that the system can flush its buffers?
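To see where the system actually stands, the relevant tunables and counters can be inspected directly (a Linux-only sketch; the procfs paths are standard, but the values you see depend on your kernel and distribution):

```shell
# Kernel writeback tunables: percentages of main memory that may be dirty.
cat /proc/sys/vm/dirty_background_ratio   # background flusher starts here
cat /proc/sys/vm/dirty_ratio              # writers are throttled here

# Current totals of dirty and in-flight-writeback memory.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Watching the `Dirty:` line while the test processes run shows whether the freeze coincides with hitting these thresholds.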
Assuming this question is solved now, I'll close this issue.