Add head node support for SSH fleets #2292
Conversation
* Add `proxy_jump` property
* Store configuration as part of `RemoteConnectionInfo`
* Always use a project key to connect to SSH instances, drop backward compatibility code (previously, the user-provided key was used, as the project key was not added to the SSH instance; this was fixed in #1716)

NOTE: services are not currently supported, proxy support will be added in a separate PR.

Part-of: #2010
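For context, a rough sketch of what an SSH fleet configuration with a head node could look like with this change. The `proxy_jump` block and its field names are assumptions based on the PR description, not the final schema; addresses and key paths are placeholders.

```yaml
type: fleet
name: my-ssh-fleet

ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/worker_key
  # Hypothetical head node (proxy jump) section; field names are illustrative.
  proxy_jump:
    hostname: 203.0.113.10
    user: ubuntu
    identity_file: ~/.ssh/head_node_key
  hosts:
    - 10.0.0.2
    - 10.0.0.3
```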
```python
ssh_user: str,
ssh_keys: List[SSHKey],
ssh_proxy: Optional[SSHConnectionParams],
ssh_proxy_keys: list[SSHKey],
```
I guess ssh_proxy_keys should be Optional?
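A minimal sketch of the suggested typing, assuming the two proxy parameters are meant to be present or absent together; the function name and stub classes below are placeholders, not the actual dstack code:

```python
from typing import List, Optional


class SSHKey: ...  # stand-in for the real dstack model
class SSHConnectionParams: ...  # stand-in for the real dstack model


def configure_ssh_access(  # hypothetical name, for illustration only
    ssh_user: str,
    ssh_keys: List[SSHKey],
    ssh_proxy: Optional[SSHConnectionParams] = None,
    ssh_proxy_keys: Optional[List[SSHKey]] = None,
) -> None:
    # If there is no proxy (head node), neither parameter is provided,
    # so making both Optional keeps the signature consistent.
    if (ssh_proxy is None) != (ssh_proxy_keys is None):
        raise ValueError("ssh_proxy and ssh_proxy_keys must be set together")
```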
> To be able to attach to runs, both explicitly with `dstack attach` and implicitly with `dstack apply`, you must either add a head node key (`~/.ssh/head_node_key`) to an SSH agent or configure a key path in `~/.ssh/config`:
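For illustration, the two options mentioned in the quoted docs could look roughly like this (the host address is a placeholder):

```shell
# Option 1: add the head node key to a running SSH agent
ssh-add ~/.ssh/head_node_key
```

or an entry in `~/.ssh/config`:

```
# Option 2: point SSH at the key for the head node host
Host 203.0.113.10
    IdentityFile ~/.ssh/head_node_key
```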
I don't understand. Why do we require the user to configure a key path in `~/.ssh/config` if the user specifies `identity_file: ~/.ssh/head_node_key`? Can't we just use it to connect to the head node?
These can be two different users: the one who creates the fleet has the key, but other users who run workloads may not. With the regular setup, the key problem is solved via shim: shim keeps the user's key on the instance while a run is running. On the head node, however, there is no dstack agent to manage authorized keys, so we require each user to have the head node key preconfigured on their machine. We could download the key to the user's machine, but I'm not sure that's a good idea.
I'd put this comment somewhere in the code.
* Add `proxy_jump` property
* Store configuration as part of `RemoteConnectionInfo`
* Always use a project key to connect to SSH instances, drop backward compatibility code (previously, the user-provided key was used, as the project key was not added to the SSH instance; this was fixed in dstackai#1716)

Part-of: dstackai#2010

Co-authored-by: Victor Skvortsov <vds003@gmail.com>