3. **Flags:** To gauge the complexity of a response, I moved away from a simple rating scale and instead ask the model to detect elements such as sarcasm, humor, or complex topics. The model answers "Yes" or "No" for each element, and I count the number of "Yes" answers to decide whether a more elaborate reply is needed. This approach proved to be both simple and stable. In general, it's best to **keep as much of the logic as possible on the client side, rather than relying on the LLM response**. See [this template](https://github.com/GreenWizard2015/AIEnhancedTranslator/blob/fd7bdd567100f09050ac13431032e682db0a92be/data/translate_shallow.txt) for the prompt details; a minimal sketch of the client-side counting is shown below.
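Here is a minimal sketch of that client-side logic, assuming the model replies with one "Yes"/"No" line per flag. The flag names, the threshold, and the function names are illustrative, not the actual implementation; the real prompt lives in `translate_shallow.txt`.

```python
import re

# Assumed threshold: at least one detected flag triggers the more complex reply path.
THRESHOLD = 1

def count_yes_flags(llm_response: str) -> int:
    """Count how many flags the model answered "Yes" to.

    Expects a response like:
        Sarcasm: No
        Humor: Yes
        Complex topic: Yes
    """
    answers = re.findall(r"\b(yes|no)\b", llm_response, flags=re.IGNORECASE)
    return sum(1 for answer in answers if answer.lower() == "yes")

def needs_complex_reply(llm_response: str) -> bool:
    # The decision stays on the client: the LLM only emits Yes/No answers,
    # and the counting/thresholding is plain deterministic code.
    return count_yes_flags(llm_response) >= THRESHOLD

if __name__ == "__main__":
    response = "Sarcasm: No\nHumor: Yes\nComplex topic: Yes"
    print(needs_complex_reply(response))  # True: two "Yes" answers >= THRESHOLD
```

Because the model is only asked for Yes/No tokens, a malformed answer simply yields a count of zero rather than breaking the pipeline, which is what makes this scheme more stable than parsing a numeric rating.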