Conversation
docs/10-clusters.md
Outdated
### Run a command on a specific node in the cluster
To run a command on a specific node in the cluster, run:
```sh
aztk spark cluster node-run --id <your_cluster_id> --node-id <your_node_id> "<command>"
```
`exec` or `node-exec`? That stays more in line with `docker exec`.
There is already a `cluster run`, so I figured `cluster node-run` made the most sense here.
```python
def print_execute_result(node_id, result):
    print("-" * (len(node_id) + 6))
```
Should use the log here, I think.
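A minimal sketch of the suggestion, assuming `log` is a standard `logging`-based logger like the one the project exposes (the logger setup here is illustrative, not aztk's actual configuration):

```python
import logging

log = logging.getLogger("aztk")

def print_execute_result(node_id, result):
    # Same separator as the diff above, but emitted through the logger
    # instead of a bare print so output respects log configuration
    log.info("-" * (len(node_id) + 6))
    log.info("%s", result)
```

With a handler attached, the output is identical to the `print` version but can be filtered or redirected like any other log record.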
aztk/client.py
Outdated
```diff
@@ -225,6 +225,32 @@ def __delete_user_on_pool(self, username, pool_id, nodes):
         futures = [exector.submit(self.__delete_user, pool_id, node.id, username) for node in nodes]
         concurrent.futures.wait(futures)

+    def __node_run(self, cluster_id, node_id, command, internal, container_name=None, timeout=None):
+        pool, nodes = self.__get_pool_details(cluster_id)
+        node = [node for node in nodes if node.id == node_id]
```
I would use `next(node for node in nodes if node.id == node_id)` here instead of this and the `if` below: https://stackoverflow.com/questions/9542738/python-find-in-list
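A sketch of the `next(...)` pattern from the linked answer, with made-up node objects standing in for Batch compute nodes; passing a default of `None` avoids a `StopIteration` when no node matches:

```python
from collections import namedtuple

# Stand-in for the Batch node objects; only the id attribute matters here
Node = namedtuple("Node", ["id"])
nodes = [Node("a"), Node("b"), Node("c")]

# Replaces the list comprehension plus a separate emptiness check
node = next((node for node in nodes if node.id == "b"), None)
print(node)     # Node(id='b')

missing = next((node for node in nodes if node.id == "z"), None)
print(missing)  # None
```

The single-argument form `next(gen)` would raise `StopIteration` on no match, so the two-argument form is the safer drop-in for the lookup-then-check pattern in the diff.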
aztk/client.py
Outdated
```python
ssh_key = RSA.generate(2048)
ssh_pub_key = ssh_key.publickey().exportKey('OpenSSH').decode('utf-8')
try:
    self.__create_user_on_node('aztk', pool.id, node.id, ssh_pub_key)
```
Shouldn't this user have some kind of random id so it doesn't conflict?
Maybe, but if the user already exists, it is just deleted and recreated. So the only potential issue is if someone adds `aztk` as a user and then runs a command. In that case, their created user would be deleted, so adding a randomly generated user id would probably be best here.
Or running `node-run` twice at the same time. Not sure what deleting the user while logged in does.
I believe nothing would happen while the session is open. However, making the user id generated would still solve that. On second thought, for this PR, maybe the CLI should just be

Though with this, the problem is the same one we talked about earlier, where you might make a mistake and run on all nodes instead of a single one. Is it, however, a big deal?
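A sketch of the randomly generated username idea discussed above (the `uuid` suffix and the helper name are assumptions for illustration, not what the PR ended up doing):

```python
import uuid

def generate_node_run_user():
    # A per-invocation username avoids clobbering a user named 'aztk' that
    # someone created themselves, and avoids collisions between two
    # concurrent node-run calls deleting each other's user
    return "aztk-" + uuid.uuid4().hex[:8]

username = generate_node_run_user()
```

Each call produces a distinct name, so the create/delete lifecycle of one `node-run` invocation cannot interfere with another.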
aztk/client.py
Outdated
```diff
@@ -4,16 +4,19 @@

 import azure.batch.models as batch_models
-import azure.batch.models.batch_error as batch_error
+from azure.batch.models import batch_error as batch_error
```
Is this import the same as the one above?
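When the target is a real submodule, the two forms bind the same module object. A quick check with a stdlib module standing in for `azure.batch.models.batch_error` (which may not be importable via the dotted form in all SDK versions, which is presumably why the diff changes it):

```python
# Both forms bind the same submodule object when the target is a real submodule
import os.path as path_a
from os import path as path_b

assert path_a is path_b

# The practical difference: "from X import Y" also works when Y is a plain
# attribute of X, while "import X.Y" requires Y to be an importable module.
```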
```python
    log.info("%s\n", result)
else:
    for line in result:
        print(line)
```
Still a `print` here?
Fix #571