WARNING: canceling conflicted backends #90

Closed
MichaelDBA opened this issue Aug 5, 2016 · 10 comments

@MichaelDBA (Collaborator) commented Aug 5, 2016

I see that this can occur if the timeout is exceeded, but is anything left in bad shape when it aborts this way? Or does pg_repack clean up gracefully, i.e., are the table(s) back in their original condition?

I was using version 1.3.3 when this occurred on PostgreSQL 9.4.8, Debian Wheezy.

@schmiddy (Member) commented Aug 5, 2016

The "canceling conflicted backends" message is pg_repack telling you that it doesn't want to wait any longer during either the initial setup phase or the final swap phase, and has resorted to killing other backends which appear to be interfering. There's no clean-up that pg_repack can or should do here; whatever other queries that pg_repack has killed off may need to retry their work at a later point.

schmiddy closed this as completed Aug 5, 2016
@MichaelDBA (Collaborator, Author)

You said, "...and has resorted to killing other backends which appear to be interfering." Is pg_repack trying to execute pg_terminate_backend() or pg_cancel_backend() on other PIDs? If so, can you specify a parameter that will stop pg_repack from doing this? If not, which one does it use: pg_cancel_backend() or pg_terminate_backend()?

@schmiddy (Member) commented Aug 9, 2016

The (simplified) logic is:

  • if more than wait_timeout * 2 seconds have elapsed, use pg_terminate_backend to terminate any backends which appear to be conflicting with pg_repack when we are attempting to acquire a lock.
  • else if more than wait_timeout seconds have elapsed, use pg_cancel_backend.
  • else if less than wait_timeout seconds have elapsed, wait patiently.

So you could set wait_timeout to a large value to effectively avoid this behavior. We don't have a knob to disable the behavior entirely; it wouldn't be too hard to add, though I'm not sure if it'd be generally useful.
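
To make that escalation concrete, here is a minimal sketch of the decision logic, written in C (this is not pg_repack's actual source; the helper names are invented stand-ins for the SQL the real code issues, i.e. pg_cancel_backend()/pg_terminate_backend() against the PIDs holding conflicting locks):

    #include <stdio.h>

    /* Hypothetical stand-ins for the SQL pg_repack sends, e.g.
     * SELECT pg_cancel_backend(pid) or pg_terminate_backend(pid)
     * for each backend holding a conflicting lock. */
    static void cancel_conflicting_backends(void)    { printf("cancel conflicting backends\n"); }
    static void terminate_conflicting_backends(void) { printf("terminate conflicting backends\n"); }

    /* Decide what to do after waiting `elapsed` seconds for a lock. */
    static void handle_lock_wait(int elapsed, int wait_timeout)
    {
        if (elapsed > wait_timeout * 2)
            terminate_conflicting_backends();   /* last resort: kill the sessions */
        else if (elapsed > wait_timeout)
            cancel_conflicting_backends();      /* cancel their running queries */
        /* else: keep waiting patiently for the lock */
    }

    int main(void)
    {
        handle_lock_wait(30, 60);    /* within wait_timeout: nothing happens */
        handle_lock_wait(70, 60);    /* past wait_timeout: cancel */
        handle_lock_wait(130, 60);   /* past 2 * wait_timeout: terminate */
        return 0;
    }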

@MichaelDBA (Collaborator, Author) commented Aug 9, 2016

Wow, I want to avoid pg_repack acting like a DBA by canceling statements or killing connections.

If I set wait_timeout to some arbitrarily large value that would never get invoked, can I kill the pg_repack PIDs at that point without damaging the table(s) involved? Otherwise, I have to keep monitoring the process to make sure we never hit these waiting conditions.

I think it would be a strategic feature to have the option of pg_repack ending gracefully if it exceeds the wait timeout, rather than being allowed to cancel or terminate other PIDs.

My manager would not let me use pg_repack if he knew that it can go out and kill other PostgreSQL PIDs. There go the cron jobs that clean up the database.

@Mark-Steben

I use pg_repack extensively too, and I like Mike's idea: end gracefully after the timeout without cancelling.

@schmiddy (Member)

Yeah, this seems like a reasonable request. It shouldn't be too hard to optionally bail out and call repack_cleanup() instead of killing backends inside lock_exclusive() if we are over wait_timeout; see the sketch after the list below. Some issues to sort out:

  • should the default wait_timeout be more than 60 seconds if we give users some new --dont-kill-backends option?
  • should we similarly avoid kill_ddl() when attempting to take the initial ACCESS SHARE lock if the user has enabled --dont-kill-backends? kill_ddl() should really only affect competing ACCESS EXCLUSIVE locks held on the table, i.e. any DDL commands on the target table, which should be a rare-or-never event if you're in the middle of repacking the table.
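
A rough, purely illustrative sketch of that bail-out follows (repack_cleanup() and lock_exclusive() are the function names mentioned above, but every other name here is a made-up stub, not the actual implementation):

    #include <stdbool.h>

    /* Illustrative stubs; in pg_repack these are real operations. */
    static bool try_lock(void)         { return false; }  /* attempt to acquire the exclusive lock */
    static void repack_cleanup(void)   { }                /* drop temporary objects and give up */
    static void kill_conflicting(void) { }                /* cancel/terminate competing backends */

    /* Sketch of lock_exclusive() with an opt-out from killing backends. */
    static bool lock_exclusive(int elapsed, int wait_timeout, bool dont_kill_backends)
    {
        if (try_lock())
            return true;

        if (elapsed > wait_timeout)
        {
            if (dont_kill_backends)
            {
                repack_cleanup();       /* bail out gracefully instead of killing anyone */
                return false;
            }
            kill_conflicting();         /* existing escalation path */
        }
        return false;                   /* caller keeps waiting / retries */
    }

    int main(void)
    {
        /* e.g. 90 seconds elapsed, 60-second timeout, --dont-kill-backends enabled */
        return lock_exclusive(90, 60, true) ? 0 : 1;
    }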

@jirihlinka

@schmiddy: As this issue was closed after the release of 1.3.4, may I ask you when there will be a new version of pg_repack with the requested functionality? My production use of pg_repack depends heavily on it, just as @MichaelDBA's does.
Thank you.

@MichaelDBA (Collaborator, Author)

When will this functionality be available? I am also prohibited from using pg_repack in a production environment due to the invasive nature of it killing other PIDs.

@MichaelDBA (Collaborator, Author)

Since this issue is now closed, is this feature implemented now? If so, when will it be released?

@shiwangini

You will have to specify the --no-kill-backend parameter; see https://pgxn.org/dist/pg_repack/1.4.0/doc/pg_repack.html
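
For example, an invocation might look like this (the database and table names are placeholders; --wait-timeout and --no-kill-backend are described in the 1.4.0 documentation linked above):

    pg_repack --no-kill-backend --wait-timeout=120 --table=mytable mydb

With --no-kill-backend set, pg_repack gives up once the wait timeout is exceeded instead of cancelling or terminating the conflicting backends.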
