
release: investoday industry 2.0.0#1356

Merged
crazywoola merged 1 commit into langgenius:main from kenneth-bro:feat/industry on Oct 13, 2025

Conversation

@kenneth-bro
Contributor

Plugin Submission Form

1. Metadata

2. Submission Type

  • New plugin submission
  • Version update for existing plugin

3. Description

4. Checklist

  • I have read and followed the Publish to Dify Marketplace guidelines
  • I have read and comply with the Plugin Developer Agreement
  • I confirm my plugin works properly on both Dify Community Edition and Cloud Version
  • I confirm my plugin has been thoroughly tested for completeness and functionality
  • My plugin brings new value to Dify

5. Documentation Checklist

Please confirm that your plugin README includes all necessary information:

  • Step-by-step setup instructions
  • Detailed usage instructions
  • All required APIs and credentials are clearly listed
  • Connection requirements and configuration details
  • Link to the repository for the plugin source code

6. Privacy Protection Information

Based on Dify Plugin Privacy Protection Guidelines:

Data Collection

Privacy Policy

  • I confirm that I have prepared and included a privacy policy in my plugin package based on the Plugin Privacy Protection Guidelines

@kenneth-bro changed the title from release: 2.0.0 to release: investoday industry 2.0.0 on Oct 10, 2025
@crazywoola merged commit a54615b into langgenius:main on Oct 13, 2025
2 checks passed
Gmasterzhangxinyang pushed a commit to Gmasterzhangxinyang/dify-plugins that referenced this pull request Apr 6, 2026
…us#1356)

* chore(dependencies): update dependencies and project configuration

- Added uv.lock to .gitignore for dependency management
- Updated README.md with uv pip compile command
- Fixed code formatting in multiple files
- Added test cases for content to parts conversion
- Improved error handling in GoogleProvider
- Updated requirements.txt and pyproject.toml for dependency management
- Added pyproject.toml and .python-version for project setup

* fix(gemini): include thoughts in response when requested

- Added new parameter `include_thoughts` to Gemini 2.5 model YAML definitions (flash-lite, flash, pro).
- Introduced `thinking_mode` toggle for explicit control over reasoning activation.
- Updated `llm.py` to correctly map `include_thoughts`, `thinking_mode`, and `thinking_budget` into the Google GenAI `ThinkingConfig`.
- Removed legacy thought-tag wrapping logic; thoughts are now returned verbatim when enabled.
- Adjusted dependency version ranges in `pyproject.toml` to avoid breaking upgrades.
- Bumped plugin version to 0.3.0 to reflect new features.

These changes ensure users can optionally receive model reasoning content without causing 400 errors on unsupported model variants. The new parameters are backward-compatible and
clearly documented.
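The parameter mapping this commit describes can be sketched roughly as follows. This is a minimal, hypothetical sketch: the helper name `build_thinking_config` and the plain-dict return shape are assumptions for illustration; the actual `llm.py` feeds these fields into the Google GenAI SDK's `ThinkingConfig` object.

```python
def build_thinking_config(model_parameters: dict) -> dict:
    """Map the plugin's model parameters to a ThinkingConfig-style dict.

    Illustrative only: mirrors the include_thoughts / thinking_mode /
    thinking_budget mapping described in the commit message.
    """
    if not model_parameters.get("thinking_mode", True):
        # A budget of 0 disables reasoning on models that support the toggle.
        return {"thinking_budget": 0}

    config = {"include_thoughts": model_parameters.get("include_thoughts", False)}

    budget = model_parameters.get("thinking_budget")
    if budget is not None:
        config["thinking_budget"] = budget
    # Omitting thinking_budget leaves Gemini 2.5 in dynamic-reasoning mode.
    return config
```

Because unsupported fields are omitted rather than sent as nulls, model variants that reject thinking parameters do not receive them, which is how the commit avoids the 400 errors it mentions.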

* fix(gemini): include thinking content in streamed responses

- Adds explicit handling for Gemini 2.5 "thinking" tokens so they are surfaced to the caller.
- Wraps thinking text with `<think>…</think>` tags when `part.thought` is detected.
- Ensures the closing tag is emitted even if the stream ends while still in thinking mode.
- Updates parameter help text in all three Gemini 2.5 model YAML files to clarify that enabling thinking without a budget triggers dynamic reasoning.
- Removes stray debug `print(part)` statement.
- Introduces `is_thinking` state flag to track open/close boundaries correctly across chunks.

These changes make the model’s reasoning visible to users and downstream UIs, matching the expected behavior referenced in the branch name `fix-include-thoughts`.

No breaking changes; existing integrations will simply start receiving the additional tagged content when thinking is enabled.
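The streamed tag handling can be sketched as below. This is a simplified stand-in, not the plugin's actual code: the generator name and the `(text, is_thought)` tuples are assumptions (the real code reads `part.text` and `part.thought` from the GenAI stream), but it shows the `is_thinking` open/close bookkeeping across chunks that the commit describes.

```python
def wrap_thinking(parts):
    """Yield text chunks, wrapping consecutive 'thought' parts in <think> tags.

    `parts` is an iterable of (text, is_thought) tuples standing in for the
    Part objects of a streamed Gemini response.
    """
    is_thinking = False
    for text, is_thought in parts:
        if is_thought and not is_thinking:
            is_thinking = True
            yield "<think>"
        elif not is_thought and is_thinking:
            is_thinking = False
            yield "</think>"
        yield text
    # Emit the closing tag even if the stream ends mid-thought.
    if is_thinking:
        yield "</think>"
```

Tracking the boundary with a flag, rather than tagging each chunk individually, keeps a multi-chunk reasoning span inside one `<think>…</think>` pair.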

* fix(models): add missing include_thoughts parameter to Gemini 2.5 model configs

- Introduced `include_thoughts` boolean parameter (default false) to control whether the model returns its reasoning chain.
- Added `thinking_mode` boolean parameter (default true) to explicitly enable/disable thinking.
- Updated parameter descriptions for `thinking_budget` to clarify it only applies when thinking mode is enabled.
- Unified `max_output_tokens` default to 65536 across all Gemini 2.5 variants.
- Added missing `grounding` parameter to several model configs.
- Reordered features list to keep consistent ordering.

* Update README.md

* Update llm.py

* ff
