
release 1.0.0 and follow semver #120

Open
nevans opened this issue Jan 28, 2015 · 10 comments

Comments

@nevans
Copy link
Collaborator

nevans commented Jan 28, 2015

Although we claim semver, we still don't have a version 1.0.0 (which is required for semver). :)

If everything goes well with the RC I pushed last night, I'll make that 0.5.0 next Tuesday. I've held off on v1.0.0 because of the many warts, but since I've been using this in production (as-is) for several years, I think those warts aren't enough to prohibit a first release.

What will hold off a public release is if there are any backwards-incompatible changes that "need" to be made. I'd rather make them now than make a 1.0 release that is only supported for a couple of weeks or months (or that requires backporting bugfixes).

If there are no backwards-incompatible changes to be made, I'll just rename 0.5.0 as 1.0.0 a week or so after 0.5.0 is released.

@nevans
Copy link
Collaborator Author

nevans commented Jan 29, 2015

In particular, I'd like to analyze the most significant or longest lived forks and see if there's anything we need to merge from them that might impact backwards compatibility. If so, let's merge them before 1.0.0. If not, we can merge them for 1.1 or 1.2.

After a quick perusal of the fork network (most of which has been inactive for over a year), these seem to be the most significant branches that haven't sent in pull requests (and some that have):

This is not a commitment to merge in every single change they've made (we might want to make their changes easy to do as plugins or configuration rather than core part of resque-pool). It's a commitment to research them for backwards incompatibility, and plan the shortest course to semver 1.0.0 (that won't require us to release 2.0.0 in order to support what these forks are doing).

@haruska
Copy link
Contributor

haruska commented Jan 29, 2015

I'm unsure if backupify is still using resque-pool. That said, those changes to ensure you don't 2x your worker count on restart are probably still valuable to those using resque.

@ono
Copy link

ono commented Feb 3, 2015

Thanks for the heads-up. We wanted to monitor the memory usage of each worker process with monit. Monit requires a pid file, and my patch gives pid files sequential names like resque-pool.worker.1.pid, resque-pool.worker.2.pid, and so on. It also handles the case where a worker process gets killed and resque-pool spawns a replacement: it has to rewrite the existing pid file instead of creating a new one.

The reasons I didn't send a PR were:

  1. the implementation is hacky
  2. the use case is limited, so I assumed you wouldn't want to maintain it

Besides, we are also sunsetting the sub-system that currently uses it. So please don't worry about my fork.
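The patch described above can be sketched roughly like this. This is a hypothetical illustration, not the fork's actual code: the `WorkerPidFiles` class and its methods are invented names, assuming the key behaviors are stable per-slot file names and reuse of a dead worker's slot (and pid file) when a replacement is spawned, so monit keeps watching the same path.

```ruby
require "tmpdir"

# Hypothetical sketch: assign each worker a stable slot number and write
# resque-pool.worker.<slot>.pid, reusing a dead worker's slot on respawn
# so the existing pid file is rewritten rather than a new one created.
class WorkerPidFiles
  def initialize(dir)
    @dir = dir
    @slots = {} # slot number => pid (nil when the slot is free)
  end

  # Record a newly spawned worker: reuse the lowest free slot, else append.
  def register(pid)
    slot = @slots.find { |_n, p| p.nil? }&.first || @slots.size + 1
    @slots[slot] = pid
    File.write(path(slot), pid.to_s) # overwrites a stale pid file on reuse
    slot
  end

  # Free a slot when its worker dies; the next spawn rewrites the same file.
  def unregister(pid)
    slot, _ = @slots.find { |_n, p| p == pid }
    @slots[slot] = nil if slot
  end

  def path(slot)
    File.join(@dir, "resque-pool.worker.#{slot}.pid")
  end
end

# Usage: when worker 100 dies and 300 replaces it, slot 1's file is rewritten.
dir = Dir.mktmpdir
files = WorkerPidFiles.new(dir)
files.register(100) # slot 1
files.register(200) # slot 2
files.unregister(100)
files.register(300) # reuses slot 1, rewrites resque-pool.worker.1.pid
```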

@joshuaflanagan
Copy link
Contributor

The significant work in the PeopleAdmin/ShippingEasy forks has already been submitted as a PR: #85

I still think that PR has widespread value and should be pulled in. We are still actively using our forks.


@nevans
Copy link
Collaborator Author

nevans commented Oct 5, 2015

The most important of these branches (custom config loader) was merged and released in 0.6.0.rc1. Please help test it (and the other changes) out, if you can.

@joshuaflanagan
Copy link
Contributor

I'll try to get this release deployed soon, to test. For what it's worth, the custom config loader code has been running in production at 2 different companies for over a year. One company uses the custom config loader feature, and the other relies on the default yml file config, and both scenarios have been running fine with that code.

@phuongnd08
Copy link

Sorry for being nosy, but how would that custom config loader benefit the masses?

@joshuaflanagan
Copy link
Contributor

@phuongnd08 A lot of teams are constrained by the static yaml configuration. See #69, #62, and #99 as some examples of people trying to work around the static config. The custom config loader offers an extension point to solve all of those problems, without requiring resque-pool to take on code specific to those solutions.
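As a rough illustration of that extension point: a config loader is any object that responds to `call(environment)` and returns a hash of queue names to worker counts, so dynamic policies can live outside resque-pool's core. The loader class below and its time-of-day policy are invented for this sketch, and the exact wiring may differ between resque-pool versions.

```ruby
# Hypothetical custom config loader: scales worker counts by time of day
# instead of reading a static resque-pool.yml. The clock is injectable so
# the policy can be tested deterministically.
class TimeOfDayConfigLoader
  def initialize(clock = -> { Time.now.hour })
    @clock = clock
  end

  # resque-pool calls the loader with the current environment and expects
  # a hash mapping queue names to desired worker counts.
  def call(environment)
    busy = (9..17).cover?(@clock.call) # more workers during business hours
    {
      "high" => busy ? 4 : 1,
      "low"  => busy ? 2 : 1,
    }
  end
end

# Wiring sketch (assumed from the thread's description of the feature;
# check your resque-pool version's docs for the exact assignment):
#   Resque::Pool.config_loader = TimeOfDayConfigLoader.new
```

Because the pool re-invokes the loader on config reload, approaches like this could address the dynamic-scaling requests in #69, #62, and #99 without resque-pool carrying solution-specific code.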

@phuongnd08
Copy link

Thank you. Perhaps I could use this to tackle #136

@phuongnd08
Copy link

Just to clear things up: if the number of workers assigned to a queue is currently 1 and I set it to 0 on a configuration reload, would that terminate the currently executing task, or would it wait until the task completes?
