[Assist] Add in SSH context Assist endpoints #30319
Changes from all commits: 32bfb98, 7b77aa8, 0c5bb1e, 7f96513, 608d3be
@@ -0,0 +1,80 @@

/*
 * Copyright 2023 Gravitational, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package model

import (
	"context"
	"fmt"

	"github.com/gravitational/trace"
)

type commandGenerationTool struct{}

type commandGenerationToolInput struct {
	// Command is a unix command to execute.
	Command string `json:"command"`
}

func (c *commandGenerationTool) Name() string {
	return "Command Generation"
}

func (c *commandGenerationTool) Description() string {
	// The acknowledgement field is used to convince the LLM to return the JSON.
	// Based on testing, the LLM ignores the JSON when the schema has only one field.
	// Adding an additional "pseudo-field" to the schema makes the LLM return the JSON.
	return fmt.Sprintf(`Generate a Bash command.
The input must be a JSON object with the following schema:
%vjson
{
	"command": string, \\ The generated command
	"acknowledgement": boolean \\ Set to true to acknowledge that you understand the formatting
}
%v
`, "```", "```")
}

func (c *commandGenerationTool) Run(_ context.Context, _ string) (string, error) {
	// This is stubbed because commandGenerationTool is handled specially.
	// Execution of this tool breaks the loop and returns a command suggestion to the user.
	// It is still handled as a tool because testing has shown that the LLM behaves better
	// when it is treated as one.
	//
	// In addition, treating it as a Tool interface item simplifies the display and
	// prompt assembly logic significantly.
	return "", trace.NotImplemented("not implemented")
}

// parseInput is called in a special case when the planned tool is commandGenerationTool.
// This is because commandGenerationTool is handled differently from most other tools and
// forcibly terminates the thought loop.
func (*commandGenerationTool) parseInput(input string) (*commandGenerationToolInput, error) {
	output, err := parseJSONFromModel[commandGenerationToolInput](input)
	if err != nil {
		return nil, err
	}

	if output.Command == "" {
		return nil, &invalidOutputError{
			coarse: "command generation: missing command",
			detail: "command must be non-empty",
		}
	}

	// Ignore the acknowledgement field.
	// We do not care about its value. Having the command is enough.

	return &output, nil
}

Comment on lines +44 to +47:

"I'm not too familiar with the structure of Assist, but could this generate markdown instead? We could stream it to the UI quickly if that were the case, with the suggested command going between three backticks."

Reply: "OpenAI sends the markdown back most of the time." (referencing teleport/lib/ai/model/prompt.go, line 104 at commit 8895915)
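The `parseJSONFromModel` helper used by `parseInput` is not part of this diff. As the review thread above notes, the model often wraps its JSON in markdown fences, so such a helper has to tolerate that. Below is a minimal sketch of what it might look like; the fence-stripping logic, the error wrapping, and the standalone `main` are assumptions for illustration, not the actual Teleport implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// commandGenerationToolInput mirrors the schema the prompt asks the LLM to emit.
type commandGenerationToolInput struct {
	Command string `json:"command"`
}

// parseJSONFromModel extracts a JSON object of type T from raw model output.
// LLMs frequently wrap JSON in markdown code fences, so those are stripped first.
func parseJSONFromModel[T any](input string) (T, error) {
	var out T
	s := strings.TrimSpace(input)
	// Strip a leading ```json or ``` fence and a trailing ``` fence, if present.
	if strings.HasPrefix(s, "```") {
		s = strings.TrimPrefix(s, "```json")
		s = strings.TrimPrefix(s, "```")
		s = strings.TrimSuffix(strings.TrimSpace(s), "```")
	}
	// Fall back to the first {...} span in case the model added surrounding prose.
	if start := strings.Index(s, "{"); start >= 0 {
		if end := strings.LastIndex(s, "}"); end > start {
			s = s[start : end+1]
		}
	}
	if err := json.Unmarshal([]byte(s), &out); err != nil {
		return out, fmt.Errorf("model returned invalid JSON: %w", err)
	}
	return out, nil
}

func main() {
	raw := "```json\n{\"command\": \"ls -la\", \"acknowledgement\": true}\n```"
	parsed, err := parseJSONFromModel[commandGenerationToolInput](raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed.Command) // prints "ls -la"
}
```

Note that `json.Unmarshal` silently ignores the extra `acknowledgement` field, which is exactly the behavior `parseInput` relies on.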
Comment:

"Should we specify something other than bash here? Maybe we could feed the prompt the OS, if we have that available during an SSH session. If not, something more generic would avoid the assumption that the user's shell is always bash."

Reply: "I think this is something we could explore when needed. The WebUI doesn't know what OS the client is using, and I'm also not sure how good OpenAI is at generating commands for different shells. The bottom line is that roughly 98% of the syntax is the same between bash, sh, zsh, fish, or whatever you're using. The differences show up mainly when you write a script, which we don't do."
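If shell-specific prompting were ever explored, one low-cost shape for it is making the prompt a function of the (possibly unknown) target shell rather than hard-coding "Bash". The sketch below is hypothetical; `shellAwareDescription` and its fallback wording are assumptions for illustration, not part of this PR:

```go
package main

import "fmt"

// shellAwareDescription is a hypothetical variant of Description that takes the
// target shell as a parameter, falling back to generic POSIX wording when the
// remote shell is unknown. This is a sketch, not the Teleport implementation.
func shellAwareDescription(shell string) string {
	if shell == "" {
		shell = "POSIX-compatible shell"
	}
	return fmt.Sprintf(`Generate a %s command.
The input must be a JSON object with the following schema:
%vjson
{
	"command": string, \\ The generated command
	"acknowledgement": boolean \\ Set to true to acknowledge that you understand the formatting
}
%v
`, shell, "```", "```")
}

func main() {
	fmt.Println(shellAwareDescription(""))    // generic POSIX wording
	fmt.Println(shellAwareDescription("zsh")) // shell-specific wording
}
```

The default keeps the reviewer's point intact: when nothing is known about the remote host, the prompt avoids assuming bash without changing the JSON schema the rest of the tool depends on.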