Tablet unload impacted by long-running compaction cancellation #4485
@dtspence - is it the case that you manually cancelled the compactions? If so, did your command complete, or did it hang too?
The FileCompactor checks if the compaction is still enabled for every key that it writes. I'm curious whether the compaction was making progress (you said filtering was not expected, which could also be a cause). Is this happening often? If not, is bouncing the tserver an option?
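The per-key check described above can be sketched as follows. This is a minimal, self-contained illustration, not Accumulo's actual FileCompactor code; the class and method names are hypothetical. The key point it demonstrates is that cancellation is only observed between emitted key/values, so a compaction that emits nothing for a long stretch cannot react to being disabled.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CancellableCompaction {
    // Flag flipped when the compaction is disabled (e.g. tablet unload requested).
    private final AtomicBoolean enabled = new AtomicBoolean(true);

    // Sketch of a compaction write loop: the enabled check happens once per
    // key written, so a long gap between emitted keys delays cancellation.
    public int writeKeys(int totalKeys) {
        int written = 0;
        for (int i = 0; i < totalKeys; i++) {
            if (!enabled.get()) {
                break; // cancellation observed between key/values
            }
            written++; // stand-in for writing one key/value to the output file
            if (written == 5) {
                enabled.set(false); // simulate an external cancel mid-compaction
            }
        }
        return written;
    }

    public static void main(String[] args) {
        int written = new CancellableCompaction().writeKeys(100);
        System.out.println("keys written before cancel: " + written);
    }
}
```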
No, the compaction eventually logs that it has canceled. We have not been taking manual intervention.
Yes, the issue is re-appearing. It does not appear to be localized to a single t-server. We have been wondering if something tablet-related correlates with the issue. At least one tablet we were looking at was a hot-spot and contained a lot of i-files from imports.
I think that log message might be from the Manager continuing to tell the TabletServer to unload the tablet.
I'm still thinking that maybe the compaction is not making progress. I don't think there is good logging for this with compactions that run in the Tablet Server. IIRC, the way to tell if it's making progress is to check the output file for the compaction in HDFS and see if its size is increasing. If nothing is getting written to the file for a long time, then either it's filtering out a lot of data, or it's waiting on input from HDFS.
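The progress check suggested above can be sketched as two file-size samples taken some interval apart. This is a minimal stdlib sketch against a local file; for a real compaction output you would sample the file's length in HDFS instead (e.g. with the Hadoop FileSystem API or `hdfs dfs -ls`). The class and method names are illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ProgressCheck {
    // Returns true if the file grew between two size samples taken
    // `intervalMillis` apart -- a crude stand-in for watching the
    // compaction's output file size over time.
    public static boolean isGrowing(Path file, long intervalMillis)
            throws IOException, InterruptedException {
        long before = Files.size(file);
        Thread.sleep(intervalMillis);
        long after = Files.size(file);
        return after > before;
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("compaction-output", ".rf");
        // Simulate a writer appending to the output file in the background.
        Thread writer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    Files.write(tmp, new byte[1024], StandardOpenOption.APPEND);
                    Thread.sleep(50);
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        writer.start();
        System.out.println("growing: " + isGrowing(tmp, 200));
        writer.join();
        Files.delete(tmp);
    }
}
```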
I would love to have something in place to prevent a compaction from holding up the unloading of tablets. Is this something that is relatively easy to do? This would save us from long shutdowns as well.
It's not a switch that exists today. We would need to develop and test a solution. If you can identify which compactions are causing a tablet not to close, then you could run them as External Compactions. The existing codepath does not wait for the External Compactions to complete. It only waits for them if they are in the process of committing their changes to the tablet metadata.
We believe the compaction is not emitting a key/value to respond to the cancellation. We also observed that the issue appears to occur when a tablet splits (due to growth). In all (or most) of the observed instances the following occurs:
If we introduced a delay in starting compactions for newly split tablets, then that might help mitigate this issue. We would have to make the delay time configurable most likely as timing the balancer can't really be done. |
In 2.1,
Delaying starting compactions after a split could negatively impact scans. It may be hard to calibrate the delay to avoid both scan and compaction-cancel problems. We could attempt to repeatedly interrupt the compaction thread. Currently an atomic boolean is used instead of interrupting the thread because in the past some external code would eat interrupts. The atomic boolean is only checked when a key/value is returned. I experimented with pushing the atomic boolean check lower in the iterator stack in the past and it had an impact on performance. The code could do both approaches for cancellation: add new code to repeatedly interrupt the compaction thread, and leave the existing code that sets the atomic boolean.
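The dual approach described above can be sketched like this: the atomic boolean stays as-is, and a scheduled task additionally interrupts the thread so that even a compaction blocked inside I/O (where the per-key check is never reached) can be unstuck. This is an illustrative stdlib sketch, not Accumulo code; all names are hypothetical.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class DualCancel {
    // Returns true if the worker observed cancellation within the timeout.
    static boolean cancelViaInterrupt(long timeoutMillis) throws InterruptedException {
        AtomicBoolean enabled = new AtomicBoolean(true);
        CountDownLatch cancelled = new CountDownLatch(1);

        // Worker simulating a compaction stuck in a blocking call,
        // so the per-key enabled check is never reached.
        Thread worker = new Thread(() -> {
            try {
                while (enabled.get()) {   // existing atomic-boolean check
                    Thread.sleep(60_000); // stand-in for a blocking read
                }
            } catch (InterruptedException e) {
                cancelled.countDown();    // interrupt-based cancellation observed
            }
        });
        worker.start();

        enabled.set(false); // compaction is no longer enabled
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> {
            if (!enabled.get()) {
                worker.interrupt(); // repeatedly interrupt, in case one is eaten
            }
        }, 100, 100, TimeUnit.MILLISECONDS);

        boolean ok = cancelled.await(timeoutMillis, TimeUnit.MILLISECONDS);
        ses.shutdownNow();
        return ok;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("cancelled via interrupt: " + cancelViaInterrupt(5000));
    }
}
```

Repeating the interrupt (rather than interrupting once) matters here: the thread keeps getting interrupted even if some intermediate code swallows an earlier interrupt.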
If we made it configurable at the table level, then the user could make the decision as to whether or not to enable this, and how long the delay should be. We could also potentially ignore the timeout if the tablet is over the scan max files.
The code would need to emit enough information through logging to know if the property value needs adjustment. The property could be dropped in elasticity, as it would no longer be needed there. The property only addresses one potential cause of stuck compactions causing problems. There could be other cases where a stuck compaction started by other means is causing problems: for example, a user-initiated compaction that is not returning data and preventing tablet unload, or a tablet that did not recently split but is stuck in a system compaction that is preventing unload.
This changes HeapIterator.next and MultiIterator.seek to throw an InterruptedIOException if the current Thread is interrupted, and modifies CompactableImpl.compact such that it creates a scheduled task to periodically check whether the majc should no longer be running and interrupts the thread. Fixes apache#4485
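The interrupt-status check described in the PR above can be sketched as follows: on each `next()`, check whether the current thread has been interrupted and surface that as an `InterruptedIOException`. This is a minimal stdlib illustration, not the actual HeapIterator/MultiIterator change; the class and method names are hypothetical.

```java
import java.io.InterruptedIOException;
import java.util.Iterator;
import java.util.List;

public class InterruptibleIterator {
    // Sketch: check the thread's interrupt status on each next() and
    // surface it as an InterruptedIOException, so a compaction stuck
    // deep in the iterator stack can be cancelled from outside.
    static <T> T nextChecked(Iterator<T> source) throws InterruptedIOException {
        if (Thread.currentThread().isInterrupted()) {
            throw new InterruptedIOException("compaction thread interrupted");
        }
        return source.next();
    }

    public static void main(String[] args) {
        Iterator<Integer> it = List.of(1, 2, 3).iterator();
        try {
            System.out.println(nextChecked(it)); // normal read
            Thread.currentThread().interrupt();  // simulate cancellation
            nextChecked(it);                     // now throws
        } catch (InterruptedIOException e) {
            Thread.interrupted(); // clear the flag before exiting
            System.out.println("interrupted");
        }
    }
}
```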
Added an interrupt method on FileCompactor that is invoked from CompactableUtils when the majc env is no longer enabled. FileCompactor.interrupt sets an AtomicBoolean to true, and a reference to the AtomicBoolean is passed to the RFiles. Only internal compactions will use this interrupt behavior, as external compactions already interrupt the compaction thread when compaction cancellation is requested. Fixes apache#4485
Describe the bug
A tablet unload (e.g. due to a migration request) may be delayed when the tablet cannot unload until pending compaction cancellations finish. We have observed a tablet waiting 50+ minutes while compactions cancel.
Versions (OS, Maven, Java, and others, as appropriate):
To Reproduce
We are attempting to gather additional information to reproduce. Some preliminary information:
Expected behavior
Migration request should complete within some shorter time.
Screenshots
N/A
Additional context
The manager logs: