fix(dotAI): Dot AI LangChain4J - ProviderConfig fixes #35426
ihoffmann-dot wants to merge 22 commits into main
Conversation
Claude finished @ihoffmann-dot's task in 2m 44s — PR Review
Issues
1. …
2. The method iterates over …
3. Streaming runtime failures don't attempt fallback …
4. Failed model instances stay in cache after runtime errors …
5. The …
6. Reasoning-model regex is brittle: `if (model.matches("o[0-9].*")) {`
7. The REST contract removal flagged in the previous comment (removing …
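Issue 6 flags the `o[0-9].*` check. A minimal sketch of a more defensive test, assuming a hypothetical `ReasoningModels` helper (the class and method names are illustrative, not from the PR):

```java
import java.util.regex.Pattern;

public final class ReasoningModels {

    // "o" + digits, optionally followed by a dash-separated variant
    // (o1, o3-mini, o10-preview). Unlike "o[0-9].*", this rejects ids
    // such as "o1abc" and normalizes case and whitespace first.
    private static final Pattern O_SERIES = Pattern.compile("o\\d+(-.+)?");

    public static boolean isReasoningModel(final String model) {
        if (model == null) {
            return false;
        }
        return O_SERIES.matcher(model.trim().toLowerCase()).matches();
    }
}
```

A stricter variant could pair the pattern with an explicit allow-list of known reasoning-model names, trading flexibility for predictability.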
Pull Request Unsafe to Rollback!!!
- map.put(AppKeys.ROLE_PROMPT.key, appConfig.getRolePrompt());
- map.put(AppKeys.TEXT_PROMPT.key, appConfig.getTextPrompt());
- map.put(AppKeys.IMAGE_PROMPT.key, appConfig.getImagePrompt());
- map.put(AppKeys.IMAGE_SIZE.key, appConfig.getImageSize());
- map.put(AppKeys.LISTENER_INDEXER.key, appConfig.getListenerIndexer());

The Postman test in …

// Transitional — emit legacy keys from providerConfig values for one release
map.put(AppKeys.ROLE_PROMPT.key, appConfig.getRolePrompt());
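The transitional suggestion above can be sketched as a small helper. `LegacyKeyEmitter` and the flat key strings are hypothetical stand-ins for `AppKeys`/`AppConfig`, which are not reproduced here:

```java
import java.util.HashMap;
import java.util.Map;

public final class LegacyKeyEmitter {

    // Hypothetical stand-ins for AppKeys.ROLE_PROMPT.key / AppKeys.TEXT_PROMPT.key.
    static final String ROLE_PROMPT_KEY = "rolePrompt";
    static final String TEXT_PROMPT_KEY = "textPrompt";

    // Transitional: keep emitting the legacy flat keys, but source the values
    // from providerConfig-derived getters, so getConfig consumers keep working
    // for one release while the separate app secrets are retired.
    public static Map<String, String> withLegacyKeys(final String rolePrompt,
                                                     final String textPrompt) {
        final Map<String, String> map = new HashMap<>();
        if (rolePrompt != null) {
            map.put(ROLE_PROMPT_KEY, rolePrompt);
        }
        if (textPrompt != null) {
            map.put(TEXT_PROMPT_KEY, textPrompt);
        }
        return map;
    }
}
```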
Logger.warn(LangChain4jAIClient.class,
        "Streaming model '" + modelName + "' init failed: " + cause.getMessage()
                + (models.size() > 1 ? " — trying next model" : ""));
continue;
This `continue` and `return` make the code harder to read.
Also, you should introduce some observability here; logs are fine, but they don't tell us anything about how long this takes, etc.
Also, you are using a system prompt that never changes; you probably want to introduce a cache entry point to save tokens.
Good catch. I extracted the logic into `initStreamingModel()` and added timing logs. Regarding the cache, that is a good observation, but it would need some logic that we could implement in another PR.
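A minimal sketch of what the extracted `initStreamingModel()` with timing logs might look like; `Model`, `buildModel()` and the `System.out` logging are placeholders for the real dotCMS types and `Logger`:

```java
public final class StreamingInit {

    // Placeholder for the real streaming chat model type.
    interface Model { }

    // Placeholder for the actual model construction.
    static Model buildModel(final String modelName) {
        return new Model() { };
    }

    // Extracted init with timing: elapsed time is logged whether the build
    // succeeds or throws, addressing the observability comment above.
    static Model initStreamingModel(final String modelName) {
        final long start = System.nanoTime();
        try {
            return buildModel(modelName);
        } finally {
            final long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Streaming model '" + modelName
                    + "' init took " + elapsedMs + " ms");
        }
    }
}
```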
for (final String modelName : models) {
    try {
        final ProviderConfig modelConfig = ImmutableProviderConfig.copyOf(baseConfig).withModel(modelName);
        final M model = modelCache.get(
For instance, this method is easier to read.
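For illustration, the loop above could be extracted into a single method. `ModelFallback` and the `Function` factory are stand-ins for the real cache and `ImmutableProviderConfig` plumbing:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public final class ModelFallback {

    private final Map<String, Object> modelCache = new ConcurrentHashMap<>();

    // Try each configured model in priority order, returning the first one
    // that initializes. computeIfAbsent records no mapping when the factory
    // throws, so a failed init is never left in the cache.
    public Object firstWorkingModel(final List<String> models,
                                    final Function<String, Object> factory) {
        RuntimeException last = null;
        for (final String modelName : models) {
            try {
                return modelCache.computeIfAbsent(modelName, factory);
            } catch (final RuntimeException e) {
                last = e;
            }
        }
        throw last != null ? last : new IllegalStateException("no models configured");
    }
}
```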
 */
default List<String> allModels() {
    final String m = model();
    if (m == null || m.isBlank()) {
Usually, if something may be null, `Optional` is a fantastic choice.
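Applying the `Optional` suggestion to `allModels()` might look like this sketch, where `model()` is a hard-coded stand-in for the real `ProviderConfig` accessor:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public final class Models {

    // Stand-in for ProviderConfig.model(); may legitimately be null or blank.
    static String model() {
        return "gpt-4o, gpt-4o-mini";
    }

    // Optional replaces the explicit null/blank branch: an absent or blank
    // model yields an empty list, otherwise the comma-separated values
    // become a trimmed priority list.
    static List<String> allModels() {
        return Optional.ofNullable(model())
                .filter(m -> !m.isBlank())
                .map(m -> Arrays.stream(m.split(","))
                        .map(String::trim)
                        .filter(s -> !s.isEmpty())
                        .collect(Collectors.toList()))
                .orElseGet(List::of);
    }
}
```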
PRs linked to this issue
@Test
I suggest adding javadoc to tests. We generally use this format:
/**
* Method to test: {@link StoryBlockAPI#refreshReferences(Contentlet)}
* Given Scenario: This will create 2 block contents, adds a rich content to each block content and retrieve the json.
* ExpectedResult: The new json will contain the rich text data map for each block content.
*/
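Applied to this PR, the format could look like the following self-contained sketch; the `ProviderConfig` stand-in and assertion-style check replace the real interface and JUnit wiring:

```java
import java.util.Arrays;
import java.util.List;

public class AllModelsTest {

    // Minimal stand-in so the example compiles on its own; the real
    // interface lives in the PR and the real test would use JUnit.
    interface ProviderConfig {
        String model();

        default List<String> allModels() {
            final String m = model();
            return (m == null || m.isBlank())
                    ? List.of()
                    : Arrays.stream(m.split(",")).map(String::trim).toList();
        }
    }

    /**
     * Method to test: {@link ProviderConfig#allModels()}
     * Given Scenario: A provider config whose model field holds a comma-separated list.
     * ExpectedResult: allModels() returns the models as a priority-ordered list.
     */
    public static void testAllModelsParsesCommaSeparatedList() {
        final ProviderConfig config = () -> "gpt-4o,gpt-4o-mini";
        if (!List.of("gpt-4o", "gpt-4o-mini").equals(config.allModels())) {
            throw new AssertionError("unexpected: " + config.allModels());
        }
    }
}
```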
…MS#35445)

## Summary
Replaces the generic Apps UI textarea for `providerConfig` with a dedicated Angular screen that shows the current config and an example JSON side-by-side. Also ports backend fixes from dotCMS#35426 and adds a `PUT /api/v1/ai/completions/config` endpoint with credential-preserving merge.

- New `DotAiConfigDetailComponent`: two-column layout with an editable textarea on the left and the formatted example JSON on the right
- `DotAiConfigDetailResolver`: dedicated resolver that hardcodes `dotAI` as `appKey`, fixing the 404 caused by the generic resolver reading `null` from route params
- Route `dotAI/edit/:id` added before the generic `:appKey/edit/:id` so dotAI navigates to the custom screen
- `ChangeDetectorRef.detectChanges()` after async config load to fix the textarea not rendering its value until user interaction
- `PUT /api/v1/ai/completions/config` (admin-only): saves `providerConfig` JSON; `ProviderConfigMerger` preserves stored credentials when the payload contains `*****` sentinel values
- `dotAI.yml` description updated to reference OpenAI/ChatGPT directly
- Ported from dotCMS#35426: flush SSE chunks, `cancelled` flag on `IOException`, `maxRetries` warn for streaming, null check in `parseSection`, `deepCopy` in `injectApiKeyIntoSections`, `maxTokens` → `max_completion_tokens` routing for OpenAI o-series models, PR review refactors

## Configuration
```json
{
  "chat": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "gpt-4o",
    "maxTokens": 16384,
    "temperature": 1.0,
    "maxRetries": 3,
    "rolePrompt": "You are dotCMSbot, an AI assistant to help content creators.",
    "textPrompt": "Use Descriptive writing style."
  },
  "embeddings": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "text-embedding-ada-002",
    "listenerIndexer": { "default": "blog,news,webPageContent" }
  },
  "image": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "dall-e-3",
    "size": "1792x1024",
    "imagePrompt": "Use 16:9 aspect ratio."
  }
}
```

## Notes
- The custom UI is accessed via `#/apps/dotAI/edit/:siteId`; the "App screen" link in `render.jsp` already points to this URL
- Credential masking: fields with `*****` in a PUT payload are replaced with the stored values before saving, so partial edits don't wipe secrets
- `providerConfig` is not required; omitting it disables the AI features gracefully

## Related Issues
- [feat(dotAI): Dot AI LangChain4J integration (First provider/OpenAI) dotCMS#35150](dotCMS#35150)
- [fix(dotAI): Dot AI LangChain4J - ProviderConfig fixes dotCMS#35426](dotCMS#35426)
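The credential-preserving merge can be sketched as follows; the real `ProviderConfigMerger` walks nested JSON sections, while this illustrative version flattens the idea to a single map:

```java
import java.util.HashMap;
import java.util.Map;

public final class ProviderConfigMergerSketch {

    static final String SENTINEL = "*****";

    // Fields the UI masked with "*****" are replaced by the stored values
    // before saving, so a partial edit never overwrites a real credential.
    public static Map<String, String> merge(final Map<String, String> incoming,
                                            final Map<String, String> stored) {
        final Map<String, String> merged = new HashMap<>(incoming);
        merged.replaceAll((key, value) ->
                SENTINEL.equals(value) ? stored.getOrDefault(key, value) : value);
        return merged;
    }
}
```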
Summary
Consolidates AI configuration by moving prompts and per-capability settings into the `providerConfig` JSON sections, enabling a single-field setup in the Apps UI. Also adds comma-separated model fallback so any provider can specify multiple models in priority order.
- Make `providerConfig` visible in the Apps UI (`hidden: false`) and remove legacy per-field secrets (`rolePrompt`, `textPrompt`, `imagePrompt`, `imageSize`, `listenerIndexer`) from `dotAI.yml`
- `AppConfig` now reads prompts and settings from inside the `providerConfig` JSON sections instead of from separate app secrets
- `ProviderConfig.allModels()`: parse the comma-separated `model` field into a priority list for fallback

Configuration
```json
{
  "chat": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "gpt-4o,gpt-4o-mini",
    "maxTokens": 16384,
    "temperature": 1.0,
    "rolePrompt": "You are dotCMSbot, an AI assistant to help content creators.",
    "textPrompt": "Use Descriptive writing style."
  },
  "embeddings": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "text-embedding-ada-002",
    "listenerIndexer": { "default": "blog,news,webPageContent" }
  },
  "image": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "dall-e-3",
    "size": "1792x1024",
    "imagePrompt": "Use 16:9 aspect ratio."
  }
}
```

Notes
Prompts (`rolePrompt`, `textPrompt`, `imagePrompt`) and embedding settings (`listenerIndexer`) now live inside `providerConfig` sections. Separate App secrets for these fields are no longer supported.
Use `"model": "gpt-4o,gpt-4o-mini"` to try models in order. Sync requests fall back on both initialization and runtime failures; streaming falls back on initialization failures only (it cannot retry once bytes are written to the output stream).

Related Issue
feat(dotAI): Dot AI LangChain4J integration (First provider/OpenAI)
#35150