Conversation
My approach to testing would be to extract the grunt commands into something that can be replaced with simple subprocess that is either a short sleep or an exception.
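A minimal sketch of that testing idea (the helper name `run_commands` and the fake commands are hypothetical, not part of the PR): the real grunt invocations get swapped for trivial subprocesses that either sleep briefly (success) or exit non-zero (failure), so the parallel-run logic can be exercised without grunt.

```python
import subprocess
import sys

# Hypothetical test doubles: trivial subprocesses standing in for grunt.
FAKE_OK = [sys.executable, '-c', 'import time; time.sleep(0.1)']
FAKE_FAIL = [sys.executable, '-c', 'import sys; sys.exit(1)']

def run_commands(commands):
    """Start all commands in parallel and return the worst return code."""
    processes = [subprocess.Popen(cmd) for cmd in commands]
    return max(p.wait() for p in processes)
```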
I'd suggest creating the slugs in parallel, but doing the deploys serially in order of importance (since they are quick, and I'd stop if they failed on staging before going to prod).
I think the command should definitely fail. As in: when running in parallel, finish what you can, but do not continue with the steps that follow. I don't think we should continue with the deploy if any of the slug builds fail.
blackbelt/handle_github.py
Outdated
print('\n')

while len(commands):
    nextRound = []
Please adhere to PEP 8: next_round.
Leftover, it is...
blackbelt/handle_github.py
Outdated
commands = []
for command in grunt_commands:
    with tempfile.NamedTemporaryFile(delete=False) as f:
Why this over mkstemp?
Because I wasn't aware of it ;-) Since I don't need the delete attribute of NamedTemporaryFile, I can switch.
Well... I'll probably stay with NamedTemporaryFile; mkstemp doesn't support the context manager protocol (a.k.a. with ...), therefore - AFAIK - I'd need to take care of closing the handle myself, which would complicate the whole flow.
@Almad Thanks for the ideas! Let's clarify one thing:
This is definitely valid for I'd suggest splitting it into
@miiila hmm, why can't deploy work internally as described as well?
Because we are waiting for tests to pass (and I’d like to keep it).
@miiila I still don't get it. Create the slugs in parallel and wait for the tests to run; once all of this has been done successfully, do a serial deploy and fail/stop the deploy if any of those fails. Is there anything I am missing on why this can't be done?
It definitely can be done :-) All I want to discuss is the fact that the current setup doesn't wait for the tests in the ‘grunt production’ workflow.
Force-pushed from f29aa8d to 6465aa7
blackbelt/deployment.py
Outdated
check_call(['grunt', 'deploy'])
check_call(['grunt', 'deploy', '--app=apiary-staging-pre'])
check_call(['grunt', 'deploy', '--app=apiary-staging-qa'])
slug_creaction_rc = run_grunt_in_parallel((['grunt', 'create-slug'], ['grunt', 'create-slug', '--app=apiary-staging-pre'], ['grunt', 'create-slug', '--app=apiary-staging-qa']))
I'd put the tuple into several lines for better readability:
slug_creaction_rc = run_grunt_in_parallel((
    [...],
    [...],
    [...],
    [...],
))
Also, from reading just this part of the code, I'm not quite sure what slug_creaction_rc
means and what its contents are. I can see further below that it's tested for not being zero, so I guess it's a count of something, but the name doesn't help me understand why/what.
Changed to longer, but more revealing name, thank you.
blackbelt/deployment.py
Outdated
check_call(['grunt', 'deploy', '--app=apiary-staging-qa'])
slug_creaction_rc = run_grunt_in_parallel((['grunt', 'create-slug'], ['grunt', 'create-slug', '--app=apiary-staging-pre'], ['grunt', 'create-slug', '--app=apiary-staging-qa']))
if slug_creaction_rc != 0:
    post_message("Slug creation failed, deploy stopped.", "#deploy-queue")
Looks like mixing tabs and spaces in the indentation.
Interesting, I haven't done any changes, but it looks better now :-)
blackbelt/handle_github.py
Outdated
commands = []
for command in grunt_commands:
    with tempfile.NamedTemporaryFile(delete=False) as f:
        app = command[2].split('=')[1] if len(command) > 2 else 'production'
Probably good enough for now, but generally this seems fragile to me. Maybe it could look specifically for --app= or -a? Or maybe I'm not reading the line right.
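Something along these lines, perhaps (a sketch only; the helper name `get_app_name` and the default value are assumptions, not code from the PR): scan the argument list for an explicit `--app=` prefix instead of relying on the argument's position.

```python
def get_app_name(command, default='production'):
    """Look explicitly for an --app=<name> argument in the command list
    instead of assuming it sits at index 2."""
    for arg in command:
        if arg.startswith('--app='):
            # split only on the first '=' in case the app name contains one
            return arg.split('=', 1)[1]
    return default
```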
Also the line doesn't seem to be tested.
I'm sorry for the jump-in comments. I wanted to help review the last commit, but didn't want to interfere with @Almad's overall review. I focused mainly on the Python code itself, not so much on the logic. Cheers.
So I added a test for the parallel tasks and polished other parts. I'd love to test it better than it is now (6465aa7#diff-89f24b3a707f227aaff587cda0f8268aR145), but it's far behind my So I'd like to ask @Almad for another round of review. Thx.
It's a shame we support Python 2 😉. This would be rather simpler and more elegant to implement using Python 3 asyncio, since subprocesses can be coroutines and can easily be waited on as a group.
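For illustration, a minimal sketch of what that asyncio version could look like (the function names `run` and `run_all` are hypothetical; requires Python 3.7+ for `asyncio.run`):

```python
import asyncio

async def run(cmd):
    # Each subprocess is itself awaitable, so no polling loop is needed.
    proc = await asyncio.create_subprocess_exec(*cmd)
    return await proc.wait()

async def run_all(commands):
    # gather() waits on the whole group and returns the return codes
    # in the same order as `commands`.
    return await asyncio.gather(*(run(cmd) for cmd in commands))

# e.g. return_codes = asyncio.run(run_all(grunt_commands))
```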
for command in grunt_commands:
    with tempfile.NamedTemporaryFile(delete=False) as f:
        app = get_grunt_application_name(' '.join(command))
        commands.append({'app': app, 'process': Popen(command, stdout=f), 'log': f.name})
I'm not sure how this works: since f is opened with a context manager (with ... as f), the file descriptor should be closed at the end of the with block. However, the process will keep running and potentially sending data to the file descriptor for longer than the with block, after the file descriptor was closed.
Is there some magic I am missing? Perhaps this works due to Popen implementation details.
I believe this magic could be the answer (https://stackoverflow.com/a/38959232).
Still, the file descriptor should be closed outside of the with context:
>>> import tempfile
>>> with tempfile.NamedTemporaryFile(delete=False) as fp:
... pass
...
>>> fp
<closed file '<fdopen>', mode 'w+b' at 0x1038ae6f0>
>>> fp.write('x')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: I/O operation on closed file
I guess the implementation of Popen
is somehow re-opening or opening from the filename afterwards.
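The likely explanation (a sketch demonstrating the behaviour, not code from the PR): Popen hands the child process its own duplicate of f's file descriptor at spawn time, so closing f in the parent does not close the child's copy, and the child can keep writing.

```python
import subprocess
import sys
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    # The child receives a duplicate of f's file descriptor here.
    proc = subprocess.Popen(
        [sys.executable, '-c',
         'import time; time.sleep(0.2); print("late write")'],
        stdout=f,
    )

# f is closed at this point, yet the child keeps writing to its own
# duplicated descriptor until it exits.
proc.wait()
with open(f.name) as log:
    print(log.read())  # the child's output made it into the file
```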
blackbelt/handle_github.py
Outdated
while len(commands):
    next_round = []
    for command in commands:
        rc = command['process'].poll()
If I understand correctly, poll is not blocking, so this while loop is a tight loop running many times while waiting for the processes. Perhaps it would make sense to add a sleep of a second or so to the outer loop, to prevent high CPU load for the user.
Good idea, I believe 5 seconds will be OK.
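The agreed-on change could look roughly like this (a sketch modelled on the reviewed loop, assuming the same `{'process': ...}` dicts; the function name and the `interval` parameter are hypothetical):

```python
import time

def wait_for_commands(commands, interval=5):
    """Poll the running processes, carrying the still-running ones over to
    the next round, and sleep between rounds instead of spinning the CPU."""
    while commands:
        next_round = []
        for command in commands:
            rc = command['process'].poll()
            if rc is None:  # still running
                next_round.append(command)
        commands = next_round
        if commands:
            time.sleep(interval)
```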
Needs a rebase with master.
Signed-off-by: miiila <mila.votradovec@gmail.com>
Force-pushed from c333dc3 to b510dd4
@kylef @honzajavorek @Almad can you do a final review so we can merge this PR?
@kylef @honzajavorek @Almad Anything blocking here? We need to finalize this review to merge this PR.
LGTM
After setting up the new environments (staging-qa, staging-pre), black-belt was modified to handle deploys to all of them. Unfortunately, this takes some time and is not very efficient.
This PR presents a rough idea of how to deal with it. Since I am not a skilled Python dev, I'd like to hear your feedback.
My pain points:
Thanks for helping to make black-belt usable again.