This repository was archived by the owner on Jul 19, 2025. It is now read-only.

Add concurrency #33

Merged
merged 1 commit into from
Nov 3, 2015

Conversation

brynary
Member

@brynary brynary commented Oct 30, 2015

Adds thread-based concurrency to the parsing and process_sexp phases (but not the final "report" phase). Concurrency is implemented using a vanilla Ruby Queue and threads. The concurrency level is configurable via the engine config (which I think we will want to add to the spec) and defaults to 2.
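The queue-plus-threads approach described above, combined with the `FileThreadPool` class mentioned in the commit message, might look roughly like this. This is a minimal sketch, not the PR's actual code; the constructor shape and `DEFAULT_CONCURRENCY` constant are assumptions:

```ruby
# Sketch of a queue-backed thread pool for processing files concurrently.
# API shape and DEFAULT_CONCURRENCY are assumptions, not the PR's exact code.
class FileThreadPool
  DEFAULT_CONCURRENCY = 2

  def initialize(files, concurrency: DEFAULT_CONCURRENCY)
    @queue = Queue.new
    files.each { |file| @queue.push(file) }
    @concurrency = concurrency
  end

  # Runs the given block against every queued file, using @concurrency
  # worker threads. Returns once all workers have drained the queue.
  def run
    threads = @concurrency.times.map do
      Thread.new do
        loop do
          begin
            yield @queue.pop(true) # non-blocking pop
          rescue ThreadError       # queue empty: this worker is done
            break
          end
        end
      end
    end
    threads.each(&:join)
  end
end
```

Because `Queue` is thread-safe, workers can pop from it without extra locking; the non-blocking `pop(true)` raising `ThreadError` doubles as the "no more work" signal.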

This uses the Concurrent::Array, Concurrent::Hash and Concurrent::Map classes from the concurrent-ruby gem for thread-safe data structures in Flay (which is why I unpacked flay.rb from the gem dependency to patch it).

We should consider submitting a concurrency patch upstream once we've tested this more. It will require some rework, but it's manageable.

Note: The last step is switching the Ruby interpreter from MRI to JRuby in the Dockerfile (though this should produce wins even on MRI for non-Ruby languages, due to concurrent parsing in the external processes we boot, e.g. Node.js).

/c @jpignata

@brynary
Member Author

brynary commented Oct 30, 2015

The CircleCI failure here looks unrelated. Passes locally.

@@ -14,6 +14,10 @@ def languages
config.fetch("languages", {})
end

def concurrency
Contributor

Is this something we want users to be able to configure?

Contributor

Yes definitely, though we should probably consider having some max in place

Contributor

Yeah, that was my concern. Don't want someone going overboard with concurrency.

Contributor

I'd cap it at 2 for now. Later, we'll probably make this externally configurable by a build runner so that we can set the max to other values.

Contributor

👍
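The capped configuration reader discussed in this thread could be sketched as below. The clamp and the `MAX_CONCURRENCY` constant are assumptions drawn from the discussion (a default of 2 and a hard cap), not the merged code:

```ruby
# Sketch of an engine-config concurrency reader that clamps the
# user-requested value, per the review discussion. Names are assumptions.
MAX_CONCURRENCY = 2
DEFAULT_CONCURRENCY = 2

def concurrency(config)
  requested = config.fetch("concurrency", DEFAULT_CONCURRENCY).to_i
  # Clamp to the range 1..MAX_CONCURRENCY so a user can't go overboard.
  [[requested, 1].max, MAX_CONCURRENCY].min
end
```

This keeps the setting user-configurable while preventing runaway thread counts; a build runner could later raise `MAX_CONCURRENCY` externally.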

@BlakeWilliams
Contributor

Instead of pulling in flay directly, what do you think about forking flay and making changes in that repo?

@BlakeWilliams
Contributor

Alright, added some changes:

@BlakeWilliams
Contributor

cc @codeclimate/review

Thread.new do
  begin
    yield queue.pop(true)
  rescue ThreadError
  end
end


Since we're rescuing here, how would you feel about logging that this happened?

Contributor

Yeah, that's a good idea. I'll print something to stderr.

Member Author

This rescue is actually just for when queue.pop runs out of files to pop, so I don't think it should log to stderr.
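The behavior described here is standard Ruby: `Queue#pop(true)` performs a non-blocking pop and raises `ThreadError` when the queue is empty, so the rescue is the normal end-of-work signal rather than an error worth logging. A small self-contained demonstration:

```ruby
# Queue#pop(true) pops without blocking and raises ThreadError on an
# empty queue, which is how a worker learns there are no files left.
queue = Queue.new
queue.push("foo.rb")

popped = queue.pop(true) # succeeds while the queue has items

drained = begin
  queue.pop(true)        # queue is now empty
  false
rescue ThreadError
  true                   # expected path: signals "no more work"
end
```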

@MattMSumner

A few comments, otherwise this LGTM.

This PR swaps out flay with our own fork of flay that uses
concurrent-ruby classes instead of stdlib `Hash`.

This also introduces a `FileThreadPool` class for running blocks on an
array of files concurrently.
@BlakeWilliams BlakeWilliams merged commit ce27d9e into master Nov 3, 2015
@BlakeWilliams BlakeWilliams deleted the bh/concurrency branch November 3, 2015 16:16