Reaper still postponing segments #85

Closed
kamenik opened this issue Apr 27, 2017 · 16 comments

@kamenik

kamenik commented Apr 27, 2017

Hi guys,

I am trying to use Reaper on a test cluster, but I am stuck with this issue. After a few test runs it
starts to postpone segment repairs; it seems that it tries to start the same segment twice, and the second run fails
and is postponed. I tried to delete the reaper_db keyspace to reset it, but that did not help. Any idea?

A few lines from the beginning of the log:

INFO   [2017-04-27 13:35:55,188] [dw-45 - PUT /repair_run/1?state=RUNNING] c.s.r.r.RepairRunResource - modify repair run state called with: id = 1, state = Optional.of(RUNNING)
INFO   [2017-04-27 13:35:57,591] [dw-45 - PUT /repair_run/1?state=RUNNING] c.s.r.r.RepairRunResource - Starting run 1
INFO   [2017-04-27 13:35:57,591] [dw-45 - PUT /repair_run/1?state=RUNNING] c.s.r.s.RepairManager - Starting a run with id #1 with current state 'NOT_STARTED'
INFO   [2017-04-27 13:35:57,614] [dw-45 - PUT /repair_run/1?state=RUNNING] c.s.r.s.RepairManager - scheduling repair for repair run #1
INFO   [2017-04-27 13:36:00,455] [woc:1] c.s.r.s.RepairRunner - Repairs for repair run #1 starting
INFO   [2017-04-27 13:36:02,849] [woc:1] c.s.r.s.RepairRunner - Next segment to run : 1
INFO   [2017-04-27 13:36:03,370] [woc:1:1] c.s.r.s.SegmentRunner - It is ok to repair segment '1' on repair run with id '1'
INFO   [2017-04-27 13:36:03,372] [woc:1:1] c.s.r.c.JmxProxy - Triggering repair of range (-9214277316332016863,-9205713572941514142] for keyspace "woc" on host 192.168.20.19, with repair parallelism dc_parallel, in cluster with Cassandra version '3.10' (can use DATACENTER_AWARE 'true'), for column families: []
INFO   [2017-04-27 13:36:03,388] [woc:1:1] c.s.r.s.SegmentRunner - Repair for segment 1 started, status wait will timeout in 1800000 millis
INFO   [2017-04-27 13:36:06,659] [woc:1] c.s.r.s.RepairRunner - Next segment to run : 1
ERROR  [2017-04-27 13:36:06,750] [woc:1:1] c.s.r.s.SegmentRunner - SegmentRunner already exists for segment with ID: 1
WARN   [2017-04-27 13:36:06,772] [woc:1:1] c.s.r.s.SegmentRunner - Failed to connect to a coordinator node for segment 1
com.spotify.reaper.ReaperException: SegmentRunner already exists for segment with ID: 1
        at com.spotify.reaper.service.SegmentRunner.runRepair(SegmentRunner.java:154)
        at com.spotify.reaper.service.SegmentRunner.run(SegmentRunner.java:95)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
INFO   [2017-04-27 13:36:06,777] [woc:1:1] c.s.r.s.SegmentRunner - Postponing segment 1
WARN   [2017-04-27 13:36:06,778] [woc:1:1] c.s.r.s.SegmentRunner - Open files amount for process: 106
ERROR  [2017-04-27 13:36:06,785] [woc:1:1] c.s.r.s.RepairRunner - Executing SegmentRunner failed: timeout value is negative
INFO   [2017-04-27 13:36:10,067] [woc:1] c.s.r.s.RepairRunner - Next segment to run : 2
INFO   [2017-04-27 13:36:10,331] [woc:1:2] c.s.r.s.SegmentRunner - SegmentRunner declined to repair segment 2 because one of the hosts (192.168.20.19) was already involved in a repair
INFO   [2017-04-27 13:36:13,937] [woc:1] c.s.r.s.RepairRunner - Next segment to run : 2
WARN   [2017-04-27 13:36:14,018] [woc:1:2] c.s.r.s.SegmentRunner - A host (192.168.20.19) reported that it is involved in a repair, but there is no record of any ongoing repair involving the host. Sending command to abort all repairs on the host.
INFO   [2017-04-27 13:36:14,049] [woc:1:2] c.s.r.s.SegmentRunner - Postponing segment 2
WARN   [2017-04-27 13:36:14,061] [woc:1:1] c.s.r.s.SegmentRunner - repair session failed for segment with id '1' and repair number '1'
INFO   [2017-04-27 13:36:14,064] [woc:1:1] c.s.r.s.SegmentRunner - Postponing segment 1
INFO   [2017-04-27 13:36:14,078] [woc:1:1] c.s.r.s.SegmentRunner - Repair command 1 on segment 1 returned with state NOT_STARTED
ERROR  [2017-04-27 13:36:14,082] [woc:1:1] c.s.r.s.RepairRunner - Executing SegmentRunner failed: timeout value is negative
INFO   [2017-04-27 13:36:14,298] [woc:1:2] c.s.r.s.SegmentRunner - It is ok to repair segment '2' on repair run with id '1'
INFO   [2017-04-27 13:36:14,308] [woc:1:2] c.s.r.c.JmxProxy - Triggering repair of range (-9205713572941514142,-9201360333458439030] for keyspace "woc" on host 192.168.20.21, with repair parallelism dc_parallel, in cluster with Cassandra version '3.10' (can use DATACENTER_AWARE 'true'), for column families: []
INFO   [2017-04-27 13:36:14,349] [woc:1:2] c.s.r.s.SegmentRunner - Repair for segment 2 started, status wait will timeout in 1800000 millis
INFO   [2017-04-27 13:36:17,316] [woc:1] c.s.r.s.RepairRunner - Next segment to run : 2
ERROR  [2017-04-27 13:36:17,375] [woc:1:2] c.s.r.s.SegmentRunner - SegmentRunner already exists for segment with ID: 2
WARN   [2017-04-27 13:36:17,378] [woc:1:2] c.s.r.s.SegmentRunner - Failed to connect to a coordinator node for segment 2
com.spotify.reaper.ReaperException: SegmentRunner already exists for segment with ID: 2
        at com.spotify.reaper.service.SegmentRunner.runRepair(SegmentRunner.java:154)
        at com.spotify.reaper.service.SegmentRunner.run(SegmentRunner.java:95)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
INFO   [2017-04-27 13:36:17,383] [woc:1:2] c.s.r.s.SegmentRunner - Postponing segment 2
WARN   [2017-04-27 13:36:17,383] [woc:1:2] c.s.r.s.SegmentRunner - Open files amount for process: 111
ERROR  [2017-04-27 13:36:17,387] [woc:1:2] c.s.r.s.RepairRunner - Executing SegmentRunner failed: timeout value is negative
INFO   [2017-04-27 13:36:21,291] [woc:1] c.s.r.s.RepairRunner - Next segment to run : 3
INFO   [2017-04-27 13:36:21,634] [woc:1:3] c.s.r.s.SegmentRunner - SegmentRunner declined to repair segment 3 because one of the hosts (192.168.20.19) was already involved in a repair
@adejanovski
Contributor

Hi @kamenik,

I've seen this before indeed. It seems like something got stuck on your cluster, maybe a validation compaction, and that prevents Reaper from running any more repairs.
Please do a rolling restart of your cluster to get it back into a clean state and then try running a repair again.
The negative timeout value bug is fixed in the current master branch (updated today), so you can use that instead of your current version to prevent that particular error from popping up again (it can also have some other nasty side effects).
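For context, that error typically arises when a remaining wait is computed from a deadline that has already passed and is then handed to a timed wait. A minimal, hypothetical illustration (not the actual Reaper code):

    // Illustration only: a stale deadline produces a negative remaining timeout,
    // and Object.wait(long) rejects it with "timeout value is negative".
    final class NegativeTimeoutSketch {
        public static void main(String[] args) throws InterruptedException {
            long deadline = System.currentTimeMillis() - 5_000;            // deadline already passed
            long remainingMillis = deadline - System.currentTimeMillis();  // negative value

            Object condition = new Object();
            synchronized (condition) {
                condition.wait(remainingMillis); // throws IllegalArgumentException: timeout value is negative
            }
        }
    }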

Let me know how it works after the rolling restart and whether the problem still shows up with the latest master.

Thanks

@kamenik
Author

kamenik commented Apr 27, 2017

I have deleted reaper_db and restarted the Cassandra cluster, but it is still the same. Good news: the negative timeout value bug disappeared :).

@adejanovski
Contributor

Could you try setting the logging level to DEBUG in Reaper and send me the output after a few minutes of running?
Also, could you give me the output of nodetool tpstats on the node that is reported as already having a running repair?
I would also need the output of bin/spreaper list-runs to check the status of running/past repairs.

Thanks

@kamenik
Author

kamenik commented Apr 28, 2017

OK, this is after the restart and with a clean reaper_db.

nodetool_tpstats.txt
spreaper_list_runs.txt
cassandra-reaper.log.zip

@adejanovski
Contributor

According to what I see in your outputs, things are currently going OK.
Reaper won't allow multiple repair sessions on the same node, so if it tries to start repairing a new segment that involves a node which is already repairing another segment, that segment will be postponed, as sketched below.
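A minimal sketch of that rule, with hypothetical names rather than the actual SegmentRunner code: a segment only starts when none of its replica hosts is already involved in a running repair.

    // Illustration only: hypothetical types and names, not Reaper's SegmentRunner.
    import java.util.Set;

    final class PostponementSketch {
        /** Returns true only when no replica of the segment is already repairing. */
        static boolean canRepair(Set<String> segmentReplicas, Set<String> busyHosts) {
            for (String replica : segmentReplicas) {
                if (busyHosts.contains(replica)) {
                    return false; // host already involved in a repair -> postpone this segment
                }
            }
            return true;
        }
    }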

What we currently see is very different from what you had previously, with Reaper claiming it saw a repair running on a node while having no record of any ongoing repair in its own database, and then trying to kill it. That was likely related to the negative timeout bug.

So far, I'd say things look good.

Please update the ticket with the progress.

Thanks

@kamenik
Author

kamenik commented Apr 28, 2017

Thank you for the help. I would also like to ask about Reaper performance. I ran one unscheduled repair with default values and, surprisingly, it took much longer than a full repair on every server and even consumed much more CPU. The first five peaks are full repairs; the data from 15:45 to 19:00 is the Reaper repair. What do you think about it?

reaper_cpu_consumption

@adejanovski
Contributor

There are several possibilities here.
Could you share the specific command line you've used for your full repair? Did you run it on all nodes at the same time?
Can you confirm you're using Cassandra 3.10?

Another possibility could be performance problems with the Cassandra backend in Reaper.
Was the Reaper UI open the whole time on the "Repair runs" screen? If so, can you check the CPU consumption when that window isn't open?
While a repair is running in Reaper, could you share the output of:

  • nodetool tablestats reaper_db
  • nodetool tablehistograms reaper_db repair_segment

You can try to use another backend, like H2 for example, and compare results.
You can also lower the intensity to 0.5 in order to check how it affects the load.

Does the first part of the graph (the 5 spikes) show the total time for the full repair to run on all nodes?

@kamenik
Author

kamenik commented May 2, 2017

nodetool --host NODE_IP repair --full

It is run from one server against all nodes, one after another. We have Cassandra 3.10 on all servers.
On the graph, the spikes between 14:05 and 15:15 are the full repair of all five nodes.

I will try to run it with a different backend, we will see. I'm trying H2 now; it logs a lot of these stack traces, but it is running.

WARN   [2017-05-02 09:56:45,787] [woc:2:1408] c.s.r.s.SegmentRunner - SegmentRunner declined to repair segment 1408 because of an error collecting information from one of the hosts (192.168.20.17): {}
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Ljava.lang.String;
        at com.spotify.reaper.storage.postgresql.RepairParametersMapper.map(RepairParametersMapper.java:33)
        at com.spotify.reaper.storage.postgresql.RepairParametersMapper.map(RepairParametersMapper.java:28)
        at org.skife.jdbi.v2.Query$4.munge(Query.java:183)
        at org.skife.jdbi.v2.QueryResultSetMunger.munge(QueryResultSetMunger.java:41)
        at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1344)
        at org.skife.jdbi.v2.Query.fold(Query.java:173)
        at org.skife.jdbi.v2.Query.list(Query.java:82)
        at org.skife.jdbi.v2.sqlobject.ResultReturnThing$IterableReturningThing.result(ResultReturnThing.java:253)
        at org.skife.jdbi.v2.sqlobject.ResultReturnThing.map(ResultReturnThing.java:43)
        at org.skife.jdbi.v2.sqlobject.QueryHandler.invoke(QueryHandler.java:41)
        at org.skife.jdbi.v2.sqlobject.SqlObject.invoke(SqlObject.java:224)
        at org.skife.jdbi.v2.sqlobject.SqlObject$3.intercept(SqlObject.java:133)
        at org.skife.jdbi.v2.sqlobject.CloseInternalDoNotUseThisClass$$EnhancerByCGLIB$$e2535170.getRunningRepairsForCluster(<generated>)
        at com.spotify.reaper.storage.PostgresStorage.getOngoingRepairsInCluster(PostgresStorage.java:362)
        at com.spotify.reaper.service.SegmentRunner$1.initialize(SegmentRunner.java:169)
        at com.spotify.reaper.service.SegmentRunner$1.initialize(SegmentRunner.java:165)
        at org.apache.commons.lang3.concurrent.LazyInitializer.get(LazyInitializer.java:100)
        at com.spotify.reaper.service.SegmentRunner.canRepair(SegmentRunner.java:300)
        at com.spotify.reaper.service.SegmentRunner.runRepair(SegmentRunner.java:178)
        at com.spotify.reaper.service.SegmentRunner.run(SegmentRunner.java:95)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
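The cast failure above is consistent with java.sql.Array.getArray() returning Object[] (which is all JDBC guarantees, and what H2 returns) while the mapper expects String[]. A hedged sketch of a defensive conversion, with hypothetical names rather than the actual RepairParametersMapper code:

    // Illustration only: copy the JDBC array element by element instead of casting it.
    import java.sql.Array;
    import java.sql.SQLException;
    import java.util.Arrays;

    final class ColumnFamilyMappingSketch {
        static String[] readColumnFamilies(Array sqlArray) throws SQLException {
            Object[] raw = (Object[]) sqlArray.getArray();          // Object[] is the portable result type
            return Arrays.copyOf(raw, raw.length, String[].class);  // typed, element-wise copy
        }
    }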

@kamenik
Author

kamenik commented May 4, 2017

Yesterday's results: there was no traffic on the cluster, only repairing. All repairs were parallel, full, intensity 0.95; the only difference was the backend. The C* data (csstats.txt) are roughly from the middle of the first C* run. It seems there is no (or only a small) difference between the run with the UI open and the one without it.

csstats.txt

chm

@adejanovski
Contributor

Hi @kamenik,

thanks for the results!
There's obviously a big overhead related to the Cassandra backend that we're going to optimize ASAP.
I'll get back to you as soon as we have something to test on that front.

Thanks again for the time you spent on this!

@kamenik
Author

kamenik commented May 5, 2017

No problem :). I ran it with the Cassandra backend only today, and it seems there is some problem with the intensity settings too. You can see it on the graph: I set intensity 0.95, 0.5, 0.25, 0.125, 0.0625 (only the beginning of that last run is shown). Also, it says all segments are repaired some time before the run switches to state DONE; that is marked by the red lines. Some DB cleanup at the end?

intentsity

@michaelsembwever
Member

Thank you for the help. I would also like to ask about Reaper performance.

At this point the ticket has changed from being about postponed segments, now resolved, to being about Reaper performance with the Cassandra backend.

Could we either close this ticket and move the comments into a new one, or rename this ticket?

michaelsembwever added a commit that referenced this issue May 8, 2017
…ing.

Example of logging queries to a separate file found in cassandra-reaper-cassandra.yaml

ref: #85
michaelsembwever added a commit that referenced this issue May 8, 2017
…ing.

Example of logging queries to a separate file found in cassandra-reaper-cassandra.yaml

ref: #85
adejanovski pushed a commit that referenced this issue May 10, 2017
…ing. (#93)

Example of logging queries to a separate file found in cassandra-reaper-cassandra.yaml

ref: #85
kamenik closed this as completed May 10, 2017
@adejanovski
Contributor

Following up on this in #94

@adejanovski
Contributor

@kamenik: I've created a branch that fixes the performance issues you've been experiencing with the Cassandra backend.

Could you build and try the following branch? https://github.com/thelastpickle/cassandra-reaper/tree/alex/fix-parallel-repair-computation

TL;DR: the number of parallel repairs was computed from the number of tokens rather than the number of nodes. If you use vnodes, Reaper computes a high value and you end up with 15 threads competing to run repairs on your cluster. I've fixed this by using the number of nodes instead, and added a local cache to lighten the load on C*. A rough sketch of the idea is shown below.
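A minimal illustration with made-up names and divisors (not Reaper's actual formula), just to show why deriving concurrency from token ranges explodes with vnodes while deriving it from the node count stays bounded:

    // Illustration only: names and divisors are hypothetical, not Reaper's real code.
    final class ParallelRepairSketch {

        // Old behaviour (the bug): scale with token ranges. With 256 vnodes per
        // node on a small cluster this yields a very large number of parallel repairs.
        static int fromTokenRanges(int tokenRangeCount, int rangesPerRepair) {
            return Math.max(1, tokenRangeCount / rangesPerRepair);
        }

        // Fixed behaviour: scale with the number of nodes, so a small cluster
        // only ever runs a handful of segments concurrently.
        static int fromNodes(int nodeCount, int replicationFactor) {
            return Math.max(1, nodeCount / replicationFactor);
        }
    }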

Thanks

@kamenik
Author

kamenik commented May 31, 2017

@adejanovski: Thanks, it is much better now :). The big peak at the beginning is the full repair, the rest is Reaper with the C* backend. Interestingly, there is no difference between the run with intensity 0.95 and the one with 0.1.

Intensity 0.95

reaper_0_95

Intensity 0.1

reaper_0_10

@adejanovski
Contributor

Great to see the improvements on your charts.

Intensity has little effect here because your segments run very fast (within seconds, I guess). If you spend 1s repairing, then intensity 0.1 will make Reaper wait for (1/0.1 - 1) × 1s = 9s.
On top of that there is a 30s pause between each segment, which mostly explains why it takes much longer with Reaper than the full nodetool repair in your case.
That 30s pause will be configurable in the yaml file once I merge the mck/cassandra-improvements-94 branch, which should happen this week.
If you have 1000 segments, that 30s pause alone already brings the repair time to more than 8h. The arithmetic is sketched below.
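A small worked example of that arithmetic (a hypothetical helper, not Reaper's actual scheduler code):

    // Illustration of the intensity delay formula and the fixed 30s inter-segment pause.
    final class IntensityDelaySketch {
        /** Delay added after a segment: segmentDuration * (1/intensity - 1). */
        static long delayMillis(long segmentDurationMillis, double intensity) {
            return (long) (segmentDurationMillis * ((1.0 / intensity) - 1.0));
        }

        public static void main(String[] args) {
            System.out.println(delayMillis(1_000, 0.1));   // 9000 ms: 9 s wait after a 1 s segment
            System.out.println(delayMillis(1_000, 0.95));  // ~52 ms: effectively no wait
            long fixedPauseSeconds = 1_000L * 30;          // 1000 segments * 30 s pause
            System.out.println(fixedPauseSeconds / 3600.0 + " h of fixed pauses alone"); // ~8.3 h
        }
    }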

The upcoming merge will bring many more improvements to the data model and make good use of the Cassandra row cache for segments.

michaelsembwever added a commit that referenced this issue Jun 27, 2017
…ing. (#93)

Example of logging queries to a separate file found in cassandra-reaper-cassandra.yaml

ref: #85