I cannot save the LLM's config (e.g. MAX_TOKEN) in workflow after publishing 【v0.6.3】 #3536
Comments
To address the issue of not being able to save the LLM's MAX_TOKEN configuration in Dify version 0.6.3, first make sure the configuration change is actually being applied within the system. The MAX_TOKEN setting is defined in Dify's model configuration as follows, allowing a range from 1 to 8192:

```yaml
- name: max_tokens
  use_template: max_tokens
  default: 2048
  min: 1
  max: 8192
```

If you're encountering issues saving this setting, it could be related to changes in how Dify handles JSON and XML response formats in version 0.6.3. These changes might affect configuration saving if your setup relies on specific output formats for processing or storage. Without a specific error message or a more detailed description of the behavior, it's difficult to pinpoint a cause. Could you share the exact error you see when attempting to save the MAX_TOKEN setting? That information will be crucial for identifying the root cause.
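As an aside, the min/max/default values in the schema above imply clamping behavior along these lines. This is only an illustrative sketch — the function and dictionary names are hypothetical, not Dify's actual implementation:

```python
# Hypothetical sketch: validating a max_tokens value against the schema
# shown above (min: 1, max: 8192, default: 2048). Names are illustrative,
# not taken from Dify's codebase.

MAX_TOKENS_SCHEMA = {"default": 2048, "min": 1, "max": 8192}

def resolve_max_tokens(value=None, schema=MAX_TOKENS_SCHEMA):
    """Fall back to the default when unset; clamp to the allowed range."""
    if value is None:
        return schema["default"]
    return max(schema["min"], min(schema["max"], value))

print(resolve_max_tokens())        # unset -> schema default (2048)
print(resolve_max_tokens(128000))  # clamped to schema max (8192)
```

If the UI silently resets a value like this, the saved config would appear to "restore to the default" exactly as reported later in this thread.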
Could you provide more information about this issue? Which LLM do you use for that node? A screenshot is preferred.
GPT-4 Turbo has a 128k context, and the situation is the same: I can only set it to 4096. Is this restricted by Dify or by OpenAI?
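For context on the 4096 ceiling mentioned here: to my understanding, GPT-4 Turbo's 128k figure refers to the input context window, while generated output is capped separately at 4,096 tokens on OpenAI's side. A minimal sketch of a chat-completions request body (built locally, never sent; the model identifier is an assumption):

```python
# Sketch of an OpenAI chat-completions request body, built as a plain dict
# without any network call. max_tokens limits *output* tokens only; the
# 128k context window applies to the input side.

def build_request(prompt, max_tokens=4096):
    return {
        "model": "gpt-4-turbo",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

req = build_request("hello")
print(req["max_tokens"])  # 4096
```

So a 4096 cap in the UI would be consistent with the provider-side output limit rather than a Dify-specific restriction, though that does not explain the setting failing to save at all.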
@upseem Already answered.
In my workflow, I have 3 LLM nodes.
@crazywoola I have recorded a video for you. It seems to always restore to the default LLM setting. 2024-04-18.09-20-34.mp4
I encountered the same problem today.
Self Checks
Dify version
0.6.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
I cannot save the LLM's config (e.g. MAX_TOKEN) in a workflow after publishing 【v0.6.3】
✔️ Expected Behavior
The LLM's config (e.g. MAX_TOKEN) should be saved and persist after the workflow is published.
❌ Actual Behavior
No response