
Commit 51c7994

update before publishing
1 parent 3fce97b commit 51c7994

File tree: 5 files changed (+79, -12 lines)


.env

Lines changed: 2 additions & 1 deletion
@@ -1 +1,2 @@
-OPENAI_API_KEY=key
+# rename this file to .env.local and fill in the values
+OPENAI_API_KEY=your OpenAI API key
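For context, once this template is renamed to `.env.local`, the application picks the key up at startup via `python-dotenv` (see the `main.py` change below); alternatively, the key can be exported as a plain environment variable. A minimal sketch of that lookup, assuming `.env.local` sits in the working directory:

```python
import os
import dotenv

# Load .env.local if it exists; values in it override anything already set in
# the environment (this mirrors the call added to main.py in this commit).
if os.path.exists('.env.local'):
    dotenv.load_dotenv('.env.local', override=True)

api_key = os.environ.get('OPENAI_API_KEY')
if not api_key:
    raise RuntimeError('OPENAI_API_KEY is not set; copy .env to .env.local and fill it in')
```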

.github/FUNDING.yml

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+patreon: GreenWizard

README.md

Lines changed: 60 additions & 0 deletions
@@ -1 +1,61 @@
 # AI Enhanced Translator
+
+The **AI Enhanced Translator** is a tool that harnesses artificial intelligence to provide advanced translation services. It streamlines and enhances the translation process by leveraging the OpenAI GPT-3.5 Turbo model.
+
+**Table of Contents**
+- [AI Enhanced Translator](#ai-enhanced-translator)
+- [Key Features](#key-features)
+- [Quick Start](#quick-start)
+- [Usage](#usage)
+- [Contributing](#contributing)
+- [Some prompt engineering takeaways](#some-prompt-engineering-takeaways)
+
+## Key Features
+
+- **High-Quality Translation**: The AI Enhanced Translator offers top-notch translations that excel at handling informal language, humor, jargon, and complex text.
+- **Multilingual Support**: It supports a wide range of languages, making it a versatile tool for various translation needs.
+- **Adaptive Writing Styles**: The translator adapts to different writing styles, ensuring that the translated text maintains the intended tone and context.
+- **Cultural Nuance Consideration**: It takes cultural nuances into account, resulting in translations that are culturally sensitive and accurate.
+
+## Quick Start
+
+To use the AI Enhanced Translator:
+
+1. Ensure you have Python 3.x installed (tested on 3.9 and 3.10).
+2. Install the required dependencies using `pip install -r requirements.txt`.
+3. Obtain an OpenAI API key by [signing up](https://platform.openai.com/) and creating a personal API key.
+4. Set your API key:
+   - As an environment variable named `OPENAI_API_KEY`.
+   - In the `.env.local` file.
+   - In the UI, by clicking the `Switch API Key` button.
+
+## Usage
+
+1. Enter your text into the provided text field.
+2. A quick translation via Google Translate is updated periodically as you type.
+3. For a more refined translation using the AI model, press `Ctrl+Enter`.
+
+Please note that the AI-powered translation may take a bit longer but offers significantly improved quality.
+
+## Contributing
+
+We welcome contributions to the AI Enhanced Translator project. If you'd like to contribute, please follow these steps:
+
+1. Fork the repository.
+2. Create a new branch for your feature or bug fix.
+3. Make your changes and commit them with clear, descriptive messages.
+4. Push your changes to your fork.
+5. Submit a pull request to the main repository.
+
+Your contributions will help make this tool even more valuable for users worldwide. Thank you for your support!
+
+## Some prompt engineering takeaways
+
+Here are the main lessons I've learned from my first experience with the ChatGPT API. I aimed to keep things simple and straightforward, and here's what I discovered:
+
+1. **One Prompt, One Task:** When using ChatGPT, it's best to stick to one task per prompt. While it may be tempting to create branching prompts with different outcomes, this approach can lead to instability in the responses. It's generally more reliable to keep each prompt focused on a single task or question.
+2. **Custom Response Format:** While many recommend forcing ChatGPT to respond in JSON, I found that approach somewhat cumbersome. Instead, I developed my own response format that takes into account the nuances of how the language model works. Simplicity and clarity in the response format can make working with ChatGPT more straightforward.
+3. **Flags:** To gauge the complexity of responses, I moved away from a simple rating scale and instead began detecting elements like sarcasm, humor, or complex topics. The model responds with "Yes" or "No" to indicate the presence of these elements, and I count the number of "Yes" answers to decide whether a more complex reply is needed (see the sketch after this diff). This approach proved to be both simple and stable. In general, it's best to **keep as much of the logic as possible on the client side, rather than relying on the LLM response**. See [this template](https://github.com/GreenWizard2015/AIEnhancedTranslator/blob/fd7bdd567100f09050ac13431032e682db0a92be/data/translate_shallow.txt) for more details.
+4. **Explicit Requests:** Sometimes it's not enough to ask for something in a general way, like "issues with ...: {list}". To get more precise responses, it can help to request a list, specify the minimum number of elements, and explicitly begin the list while describing the meaning of its elements. Providing clear context in your prompts leads to more accurate and relevant responses. See [this template](https://github.com/GreenWizard2015/AIEnhancedTranslator/blob/fd7bdd567100f09050ac13431032e682db0a92be/data/translate_deep.txt#L8-L9) for more details.
+5. **Translation of the notifications:** Interestingly, you can request a translation of some UI messages directly in the prompt. This is highly unusual for me as a programmer. See [this template](https://github.com/GreenWizard2015/AIEnhancedTranslator/blob/e1c0975202e926e339ee10766810f26d710a2f4a/prompts/translate_shallow.txt#L14) for more details. Ultimately, I chose not to pursue this approach because it reduces the stability of the system, but it's a really interesting idea, in my opinion.
+6. **Prompt optimization:** After receiving the first results, I started optimizing the size and stability of the prompts. The AI doesn't care about grammar and spelling, so the prompt can be shortened to the minimum necessary for stable text generation. This improves the stability of the output and reduces the cost of requests. However, I haven't shortened the critically important parts of the prompt. Incidentally, basic optimization can be done quite well with ChatGPT itself, so I assume the process of refining prompts could be automated without significant cost.
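To make the "Flags" idea (takeaway 3) concrete, here is a minimal sketch of the client-side counting it describes. The flag names, the "flag: Yes/No" reply format, and the threshold are illustrative assumptions; the project's actual wording lives in the linked `translate_shallow.txt` template.

```python
# Sketch of the client-side "flags" counting from takeaway 3.
# Flag names, reply format, and threshold are assumptions for illustration;
# the real prompt template is linked above.
FLAGS = ('sarcasm', 'humor', 'complex topic')

def needs_deep_translation(model_reply: str, threshold: int = 1) -> bool:
    """Return True if the model answered "Yes" to more than `threshold` flags.

    Expects one "flag: Yes/No" line per flag, e.g.:
        sarcasm: No
        humor: Yes
        complex topic: Yes
    """
    answers = {}
    for line in model_reply.splitlines():
        name, sep, value = line.partition(':')
        if sep:
            answers[name.strip().lower()] = value.strip().lower().startswith('yes')
    yes_count = sum(1 for flag in FLAGS if answers.get(flag, False))
    return yes_count > threshold

# Two "Yes" answers -> route the text to the slower, deeper AI translation.
print(needs_deep_translation("sarcasm: No\nhumor: Yes\ncomplex topic: Yes"))  # True
```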

main.py

Lines changed: 11 additions & 11 deletions
@@ -4,19 +4,10 @@
 from tkinter import ttk
 from tkinter import simpledialog
 import tkinter.scrolledtext as tkst
-import json
+import json, os, logging
 from core.worker import CWorker
-# set up logging
-import logging
-logging.basicConfig(
-  filename='debug.log', filemode='w',
-  level=logging.INFO,
-  format='%(asctime)s %(levelname)s %(message)s'
-)
-# set up environment variables
 import dotenv
-dotenv.load_dotenv('.env')
-dotenv.load_dotenv('.env.local', override=True)
+
 # main app
 # TODO: figure out how to enforce deep translation via UI
 # TODO: add translation history
@@ -221,6 +212,15 @@ def configs(self): return self._configs
 # End of class
 
 def main():
+  # set up logging
+  logging.basicConfig(
+    filename='debug.log', filemode='w',
+    level=logging.INFO,
+    format='%(asctime)s %(levelname)s %(message)s'
+  )
+  # set up environment variables
+  if os.path.exists('.env.local'): dotenv.load_dotenv('.env.local', override=True)
+
   # load languages from data/languages.json
   with open('data/languages.json', 'r') as f: languages = json.load(f)
   # load configs
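The net effect of this change: logging and environment loading now happen at startup inside `main()` rather than at import time, and the repository's `.env` template is no longer loaded at all, only an existing `.env.local`. A standalone sketch of the new startup order (the remaining steps in `main.py` are omitted):

```python
import json, os, logging
import dotenv

def main():
    # Logging is configured when the app starts, not when the module is imported.
    logging.basicConfig(
        filename='debug.log', filemode='w',
        level=logging.INFO,
        format='%(asctime)s %(levelname)s %(message)s'
    )
    # Only .env.local is loaded, and only if it actually exists;
    # .env in the repository is just a template.
    if os.path.exists('.env.local'):
        dotenv.load_dotenv('.env.local', override=True)

    # Startup continues with languages, configs and the UI (omitted here).
    with open('data/languages.json', 'r') as f:
        languages = json.load(f)
    logging.info('Loaded %d languages', len(languages))

if __name__ == '__main__':
    main()
```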

requirements.txt

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+googletrans==4.0.0rc1
+langchain
+openai
+python-dotenv
+tk
