Add an endpoint to run the server processes end-to-end. #3544
Conversation
@gkowalski-google just FYI, I am going to close this PR in favor of another which I will open this morning.
@gkowalski-google Please disregard my last. I want to complete this PR. I'll make the fix you suggested.
All (the pull request submitter and all commit authors) CLAs are signed, but one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that by leaving a comment. Note to project maintainer: There may be cases where the author cannot leave a comment, or the comment is not properly detected as consent. In those cases, you can manually confirm consent of the commit author(s).
37788a9 to cba8ecb
CLAs look good, thanks!
/gcbrun
if model_status in ['SUCCESS', 'PARTIAL_SUCCESS']:
    for progress in client.scanner.run(scanner_name=None):
        pass
If we are just passing here, can we just do the following instead of looping?

client.scanner.run(scanner_name=None)
client.notifier.run(inventory_id, 0)
That would certainly be cleaner. Because the gRPC reply to a run request is a stream, we need to consume the entire stream before proceeding. Without the loop, the execution would continue without waiting for the scan to complete, for example.
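The point above can be sketched with a toy stand-in for a server-streaming RPC: the work happens lazily as the caller iterates the reply stream, so merely calling the method does nothing. (These names are hypothetical stand-ins, not the Forseti or gRPC API.)

```python
def run_scanner():
    """Simulates a server-streaming RPC: work happens only as the
    caller iterates the reply stream."""
    completed = []

    def stream():
        for step in ('scanning', 'writing violations', 'done'):
            completed.append(step)  # work is performed lazily, per item
            yield step

    return stream(), completed

# Merely calling the RPC creates the stream but performs no work:
stream, completed = run_scanner()
assert completed == []

# Draining the stream (even with a bare loop body) forces the scan
# to run to completion before execution continues:
for _progress in stream:
    pass
assert completed == ['scanning', 'writing violations', 'done']
```

This is why the `for ... pass` loop cannot be replaced with a plain call: the loop is what consumes the stream and blocks until the scan finishes.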
Thanks for this new endpoint! Just a couple of nits to fix before merge. Otherwise, LGTM.
client.switch_model(model_handle)

if model_status not in ['SUCCESS', 'PARTIAL_SUCCESS']:
    message = 'ERROR: Model status is {}'.format(model_status)
I think you need to add a step to delete the model here, and also a step to return the message.
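The suggested fix could look roughly like the sketch below: on a failed model build, clean up the model and return the error message instead of falling through. `client`, `model_handle`, and `delete_model` are hypothetical stand-ins modeled on the surrounding snippet, not the actual Forseti client API.

```python
def check_model(client, model_handle, model_status):
    """Return an error message (after cleanup) on a bad model status,
    or None if the model is usable."""
    if model_status not in ['SUCCESS', 'PARTIAL_SUCCESS']:
        message = 'ERROR: Model status is {}'.format(model_status)
        client.delete_model(model_handle)  # suggested cleanup step
        return message                     # suggested early return
    return None
```

Returning early here keeps the rest of the run from executing against a broken model.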
google/cloud/forseti/services/cli.py (Outdated)

@@ -728,6 +735,13 @@ def do_get_configuration():
    """Get the configuration of the server."""
    output.write(client.get_server_configuration())

def do_server_run():
    """Run the Forseti server, end-to-end"""
    # for message in client.server_run():
Does this code need to be commented out? If not, remove it.
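For illustration, a minimal version of such a `do_server_run` handler could drain the streamed replies instead of leaving the loop commented out. `client.server_run()` and `output.write()` are assumed interfaces modeled on the surrounding cli.py snippet, not the actual implementation; parameters are added here only to make the sketch self-contained.

```python
def do_server_run(client, output):
    """Run the Forseti server end-to-end, surfacing progress messages
    as the reply stream arrives."""
    for message in client.server_run():
        output.write(message)  # consuming the stream blocks until done
```

As with the scanner call above, iterating the reply is what actually drives the run to completion.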
@kevensen please test this functionally before merging, thanks.
Will do
Resolves #3342