Skip to content
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
16 changes: 9 additions & 7 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -50,24 +50,26 @@ Fun Fact: Using an AI to write commits and other automations can reduce the risk

Before running AutoCommit, it's advisable to set a few environment variables 🔑:

- `OPENAI_URL`: Override openai api eg: azure openai (Optional; Default: openai url)
- `OPENAI_API_KEY`: The API key for the GPT-4 model (🚨 **Required**).
- `OPENAI_MODEL`: Specify a different language model 🔄 (Optional; Default: `gpt-4`).
- `AZURE_OPENAI_ENDPOINT`: Azure OpenAI endpoint URL (🚨 **Required** for Azure OpenAI).
- `AZURE_OPENAI_API_KEY`: The API key for Azure OpenAI (🚨 **Required** for Azure OpenAI).
- `OPENAI_URL`: Override openai api eg: azure openai (Optional; Fallback for backward compatibility)
- `OPENAI_API_KEY`: The API key for the OpenAI model (Optional; Fallback for backward compatibility).
- `OPENAI_MODEL`: Specify a different language model 🔄 (Optional; Default: `o4-mini`).
- `FINE_TUNE_PARAMS`: Additional parameters for fine-tuning the model output ⚙️ (Optional; Default: `{}`).

Add these environment variables by appending them to your `.bashrc`, `.zshrc`, or other shell configuration files 📄:

```bash
export OPENAI_URL=https://apiendpoint.openai.azure.com
export OPENAI_MODEL=llm-large
export OPENAI_API_KEY=your-openai-api-key-here
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
export AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
export OPENAI_MODEL=o4-mini
export FINE_TUNE_PARAMS='{"temperature": 0.7}'
```

Or, you can set them inline before running the AutoCommit command 🖱️:

```bash
OPENAI_URL=your-openai-api-key-here OPENAI_MODEL=gpt-4 FINE_TUNE_PARAMS='{"temperature": 0.7}' git auto-commit
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com AZURE_OPENAI_API_KEY=your-api-key OPENAI_MODEL=o4-mini git auto-commit
```

### Complete Install 📦
Expand Down
55 changes: 39 additions & 16 deletions util.go
Original file line number Diff line number Diff line change
Expand Up @@ -7,7 +7,6 @@ import (
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/cli/go-gh/v2/pkg/api"
"github.com/joho/godotenv"
openai "github.com/sashabaranov/go-openai"
"math"
"os"
"os/exec"
Expand Down Expand Up @@ -108,36 +107,60 @@ func getChatCompletionResponse(messages []azopenai.ChatMessage) (string, error)
if err != nil {
fmt.Errorf(".env file not found: %v", err)
}
keyCredential, err := azopenai.NewKeyCredential(os.Getenv("OPENAI_API_KEY"))

// Try new environment variables first, fall back to old ones for compatibility
apiKey := os.Getenv("AZURE_OPENAI_API_KEY")
if apiKey == "" {
apiKey = os.Getenv("OPENAI_API_KEY")
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
if endpoint == "" {
endpoint = os.Getenv("OPENAI_URL")
}

deploymentName := os.Getenv("OPENAI_MODEL")
if deploymentName == "" {
deploymentName = "o4-mini"
}

// Set reasonable token limit for completion
maxTokens := int32(1000)

if apiKey == "" {
return "", fmt.Errorf("export AZURE_OPENAI_API_KEY=<api_key> #execute this in your terminal and try again")
}

if endpoint == "" {
return "", fmt.Errorf("export AZURE_OPENAI_ENDPOINT=<endpoint> #execute this in your terminal and try again")
}

// Initialize the OpenAI client with API key-based authentication
keyCredential, err := azopenai.NewKeyCredential(apiKey)
if err != nil {
fmt.Errorf("export OPENAI_API_KEY=<api_key> #execute this in your terminal and try again")
return "", fmt.Errorf("error creating Azure OpenAI client: %v", err)
return "", fmt.Errorf("error creating credential: %v", err)
}
url := os.Getenv("OPENAI_URL")
model := os.Getenv("OPENAI_MODEL")

var client *azopenai.Client

if strings.Contains(url, "azure") {
client, err = azopenai.NewClientWithKeyCredential(url, keyCredential, nil)
if strings.Contains(endpoint, "azure") {
client, err = azopenai.NewClientWithKeyCredential(endpoint, keyCredential, nil)
if err != nil {
return "", fmt.Errorf("error creating Azure OpenAI client: %v", err)
}
} else {
client, err = azopenai.NewClientForOpenAI(url, keyCredential, nil)
client, err = azopenai.NewClientForOpenAI(endpoint, keyCredential, nil)
if err != nil {
return "", fmt.Errorf("error creating Azure OpenAI client: %v", err)
return "", fmt.Errorf("error creating OpenAI client: %v", err)
}

}
if model == "" {
model = openai.GPT4
}

resp, err := client.GetChatCompletions(
context.Background(),
context.TODO(),
azopenai.ChatCompletionsOptions{
Messages: messages,
Deployment: model,
Deployment: deploymentName,
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

the option should be as follows
// Make the chat completion request
resp, err := client.GetChatCompletions(context.TODO(), azopenai.ChatCompletionsOptions{
Messages: messages,
DeploymentName: &deploymentName,
MaxCompletionTokens: &maxTokens,
}, nil)

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Fixed the Azure OpenAI chat completions options structure. Updated to use MaxTokens: &maxTokens (set to 1000 tokens) and changed context to context.TODO() as suggested. The Deployment field name was correct according to the Azure SDK documentation. Changes committed in c06d10b.

MaxTokens: &maxTokens,
},
nil,
)
Expand Down