UNDERTOW-1794 DefaultAccessLogReceiver violates Closeable contract #1041

Open · wants to merge 1 commit into master from UNDERTOW-1794
Conversation

@baranowb (Contributor) commented Feb 16, 2021

if (stateUpdater.compareAndSet(this, 3, 1)) {
logWriteExecutor.execute(this);
for (int i = 0; i < 1000 && !this.closed && !pendingMessages.isEmpty(); ++i) {
final String msg = pendingMessages.poll();
@baranowb (Contributor Author), Feb 16, 2021:

It might be better to use peek()+remove() here. Though I'd assume a follow-up is better than altering the PR ad hoc.

Member:

@baranowb because this class has concurrent access, poll() is better than peek() + remove(). Between the moment we invoke peek() and the moment we invoke remove(), we risk having another thread executing flushAndTerminate. This is why we need to use poll() here.

Contributor Author:

There is a trade-off. It is possible that the XNIO thread will be terminated between poll() and write(), losing one message.
With peek()+remove(), the entry might be written twice, but it won't be lost. It's a trade-off.

Member:

You're right about that. My question now is how to prevent both scenarios. Is there some way we can synchronize just a little bit to prevent a duplicate entry in the log? I think we must avoid output that users will consider buggy if they find a duplicate entry after shutting down the server.
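
A minimal sketch of the race being discussed, using hypothetical names rather than the PR's actual code: with peek()+remove(), a second thread can observe and write the same head element before either thread removes it, so an entry can be duplicated; with poll(), each message is claimed by exactly one thread, but it is lost if that thread dies before writing it.

```java
import java.util.concurrent.ConcurrentLinkedDeque;

class PeekRemoveRaceSketch {
    private final ConcurrentLinkedDeque<String> pendingMessages = new ConcurrentLinkedDeque<>();

    // poll(): the message is claimed atomically, so at most one thread writes it,
    // but the message is lost if that thread is terminated before write() runs.
    void drainWithPoll() {
        String msg;
        while ((msg = pendingMessages.poll()) != null) {
            write(msg);
        }
    }

    // peek()+remove(): the message stays queued until the write succeeded, so it
    // cannot be lost, but two threads may both peek the same head and write it
    // twice before either one removes it.
    void drainWithPeekAndRemove() {
        String msg;
        while ((msg = pendingMessages.peek()) != null) {
            write(msg);                  // another thread may write the same msg here
            pendingMessages.remove(msg); // removes the first element equal to msg
        }
    }

    private void write(String msg) { /* append to the access log */ }
}
```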

closed = true;
if (stateUpdater.compareAndSet(this, 0, 1)) {
logWriteExecutor.execute(this);
this.closed = true;
Member:

The method must take into account that the state could change between one if and the other, so there is a possibility that all stateUpdater checks return false.
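
A sketch of the window being described, with a hypothetical class shape: if close() only tries a fixed sequence of compareAndSet calls, run() can move the state between the checks so that none of them succeeds; re-reading the state in a loop avoids that.

```java
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

class CloseRetrySketch implements Runnable {
    // state numbering assumed from the discussion: 0 = not running, 1 = queued, 2 = running
    private volatile int state = 0;
    private volatile boolean closed = false;
    private static final AtomicIntegerFieldUpdater<CloseRetrySketch> stateUpdater =
            AtomicIntegerFieldUpdater.newUpdater(CloseRetrySketch.class, "state");
    private final Executor logWriteExecutor;

    CloseRetrySketch(Executor logWriteExecutor) {
        this.logWriteExecutor = logWriteExecutor;
    }

    public void close() {
        closed = true;
        while (true) {
            if (state != 0) {
                return; // already queued or running; that run() will observe 'closed'
            }
            // nothing scheduled: queue one final run to flush pending messages;
            // if the CAS loses to a concurrent state change, re-read and retry
            if (stateUpdater.compareAndSet(this, 0, 1)) {
                logWriteExecutor.execute(this);
                return;
            }
        }
    }

    @Override
    public void run() {
        // drain and write pending messages, then move the state back to 0
    }
}
```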

try {
while (!this.pendingMessages.isEmpty()) {
final String msg = this.pendingMessages.poll();
// TODO: clarify this, how is this possible?
Member:

It might not be possible. Let's wait for the final version of run() and close() to decide this.

@fl4via fl4via added the new feature/API change New feature to be introduced or a change to the API (non suitable to minor releases) label Feb 16, 2021
@fl4via (Member) commented Feb 16, 2021:

As this is a new feature, merging this PR will require synchronization with the RFE process. I'll assist in getting the process followed as fast as we can so this is merged as soon as possible.

fl4via previously requested changes Mar 21, 2023
doRotate();
private void writeMessage(final String message) {
//NOTE: is there a need to rotate on write?
//if (System.currentTimeMillis() > changeOverPoint) {
Member:

Yes, but only once per run. Now that this method is called per message, it is no longer the right place for that check.

Member:

Should we perhaps apply the 1000 limit here? I think it would perform a bit better.

Member:

I still stand my ground; I think checking the 1000 limit at this granularity, here, performs better.
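
A method-level sketch of the alternative being weighed, assuming the receiver's existing fields (changeOverPoint, pendingMessages) and helpers (doRotate, writeMessage); the drainBatch name is hypothetical. Rotation is checked once per pass and the batch is capped at 1000 there, rather than checking on every writeMessage call.

```java
// Hypothetical per-pass drain: rotation is checked once, and at most 1000
// messages are written before control returns, so a busy queue cannot starve
// rotation or keep the worker thread occupied indefinitely.
private void drainBatch() {
    if (System.currentTimeMillis() > changeOverPoint) {
        doRotate();                 // rotate once per pass, not once per message
    }
    for (int i = 0; i < 1000; i++) {
        final String msg = pendingMessages.poll();
        if (msg == null) {
            break;                  // queue drained
        }
        writeMessage(msg);          // plain write, no rotation check inside
    }
}
```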

@fl4via fl4via added the waiting PR update Awaiting PR update(s) from contributor before merging label Mar 21, 2023
@baranowb baranowb requested a review from fl4via April 14, 2023 11:31
void accessLogWorkerFailureOnTransition();

@LogMessage(level = DEBUG)
@Message(id = 6000, value = "Access Log Worker failed to reschedule.")
Contributor Author:

Big brain math....

Contributor Author:

On the other hand - no one noticed :)

@baranowb baranowb added enhancement Enhances existing behaviour or code under verification Currently being verified (running tests, reviewing) before posting a review to contributor waiting peer review PRs that edit core classes might require an extra review and removed waiting PR update Awaiting PR update(s) from contributor before merging labels Mar 12, 2024
@baranowb baranowb dismissed fl4via’s stale review March 12, 2024 08:50

I can't find this one to mark it as outdated; let's reset.

@fl4via (Member) left a comment:

There are some pending issues here to consider.

//Log handler is closing, other resources should as well, there shouldn't
//be resources served that required this to log stuff into AL file.
throw UndertowMessages.MESSAGES.failedToLogAccessOnClose();
}
Member:

Just a thought: has this been tested? Is it better to silently drop the message or to throw an exception?
My first instinct is throwing an exception, but I still need to know whether the consequences of this exception have been explored and double-checked as harmless.

Contributor Author:

Well, it's a tricky scenario. When this happens, every connection/exchange should/will be purged.
When an exception is thrown, it potentially won't allow the connection to close.

Contributor Author:

Also, note that this is the expected behavior of a closed resource.
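
A sketch of the two behaviors being compared, assuming the receiver's existing closed flag and the message method this PR adds; it is illustrative only, not the final code.

```java
public void logMessage(final String message) {
    if (closed) {
        // The PR's approach: fail fast, in line with the Closeable contract;
        // nothing should still be logging through a receiver that was closed.
        // The alternative raised in review is to drop the message silently so
        // that no exception can interfere with connection/exchange teardown.
        throw UndertowMessages.MESSAGES.failedToLogAccessOnClose();
    }
    // ... otherwise enqueue the message and schedule run() as usual ...
}
```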

logWriteExecutor.execute(this);
for (int i = 0; i < 1000 && !this.closed && !pendingMessages.isEmpty(); ++i) {
final String msg = pendingMessages.peek();
if (msg == null) {
Member:

It is not supposed to be null, because you are checking that it is not empty.

if (stateUpdater.compareAndSet(this, 3, 1)) {
logWriteExecutor.execute(this);
for (int i = 0; i < 1000 && !this.closed && !pendingMessages.isEmpty(); ++i) {
final String msg = pendingMessages.peek();
Member:

I might be looking too much into the details here, but why peek and remove instead of just poll?

Contributor Author:

In case there is some intermittent IO failure, it should not purge entries without writing them first. Though in such a case, some overflow mechanism should be in place as well?

Contributor Author:

Also "i < 1000 && !this.closed &&" checking this in loop might not be best, should be enough to loop through and check at the end IMHO

}
}
}finally {
// flush what we might have
Member:

this is a very good point

}
return;
} else {
if (!pendingMessages.isEmpty() || forceLogRotation) {
Member:

I have been thinking... we still need a state for being in the finally block, as it prevents more than one thread from scheduling a new execution of run() at the same time.

Contributor Author:

How? If there is nothing in pendingMessages, it does not need scheduling; if someone writes, it will be able to schedule. In reality, pendingMessages is filled before the state change, so it's just a race between the logMessage thread and run() to schedule a new execution.
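
A sketch of the hand-off being described, loosely following the state numbering in the diff (the rescheduleIfNeeded name is hypothetical): the message is enqueued before any state transition, so either the producer wins the compareAndSet and schedules run(), or run() sees the non-empty queue in its finally block and reschedules itself. The small window between the emptiness check and the final CAS back to 0 is exactly the gap the reviewers are debating.

```java
// Producer side (logMessage): enqueue first, then try to schedule a run.
public void logMessage(final String message) {
    pendingMessages.add(message);                 // visible before any state change
    if (stateUpdater.compareAndSet(this, 0, 1)) { // 0 = not running -> 1 = queued
        logWriteExecutor.execute(this);
    }
}

// Consumer side (tail of run()): re-check the queue from the "inside finally"
// state so at most one thread ends up scheduling the next execution.
private void rescheduleIfNeeded() {
    stateUpdater.set(this, 3);                    // 3 = inside finally of run()
    if (!pendingMessages.isEmpty() || forceLogRotation) {
        if (stateUpdater.compareAndSet(this, 3, 1)) {
            logWriteExecutor.execute(this);
            return;
        }
    }
    stateUpdater.compareAndSet(this, 3, 0);       // nothing left: back to not running
}
```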

@@ -59,7 +58,7 @@ public class DefaultAccessLogReceiver implements AccessLogReceiver, Runnable, Cl
//0 = not running
//1 = queued
//2 = running
//3 = final state of running (inside finally of run())
Member:

I added a comment down below that we will still need state 3 to keep track of being in the finally of run().
That being said, what are we trying to achieve with the new state 3? (It will have to be moved to 4 if we decide to keep it.) Are we trying to prevent run() from doing anything after close() is invoked? Is that its purpose?

Member:

Okay, it is clearly stated in the Jira: we want to make sure that close() returns only after the receiver is fully closed. The question that remains is whether we need a new state for it. I think we do, so we can fully skip the rest of the execution of run() when it is closing, without having to resort to checking the value of closed.

Contributor Author:

I don't think so. It's a somewhat different FSM/CAS from the one that was present before, which relied only on worker threads to dump messages even after the handler had been closed and, once done, to finish closing. This can't be done now since those worker threads may be disabled (upstream issue).
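
For readability, the state values under discussion could be captured as named constants. This is only an illustrative sketch of the numbering shown in the diff above; whether an extra "closing" state (4) is needed is still being decided in this thread.

```java
private static final int STATE_NOT_RUNNING = 0; // nothing queued or running
private static final int STATE_QUEUED      = 1; // a run() has been submitted to the executor
private static final int STATE_RUNNING     = 2; // run() is draining the queue
private static final int STATE_FINISHING   = 3; // inside the finally block of run()
```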

@baranowb baranowb force-pushed the UNDERTOW-1794 branch 2 times, most recently from 4e48ee9 to 566c11f Compare May 8, 2024 15:49