Image generation returns 500 with LocalAI #91

Open
ewook opened this issue Apr 19, 2024 · 3 comments
Labels
bug Something isn't working

Comments


ewook commented Apr 19, 2024

Which version of integration_openai are you using?

2.0.0

Which version of Nextcloud are you using?

28.0.4

Which browser are you using? In case you are using the phone App, specify the Android or iOS version and device please.

Firefox 124.0.2

Describe the Bug

Running an image-generation request via Nextcloud Assistant results in an error when using LocalAI. Text requests work and return output. I'm running against the LocalAI container localai/localai:latest-aio-gpu-nvidia-cuda-11. LocalAI itself responds with a 200 and a link to the generated image. Output from LocalAI posted below:

```
api_1 | 2:01PM DBG Request received: {"model":"","language":"","n":0,"top_p":null,"top_k":null,"temperature":null,"max_tokens":null,"echo":false,"batch":0,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"frequency_penalty":0,"presence_penalty":0,"tfz":null,"typical_p":null,"seed":null,"negative_prompt":"","rope_freq_base":0,"rope_freq_scale":0,"negative_prompt_scale":0,"use_fast_tokenizer":false,"clip_skip":0,"tokenizer":"","file":"","response_format":{},"size":"1024x1024","prompt":"donkey","instruction":"","input":null,"stop":null,"messages":null,"functions":null,"function_call":null,"stream":false,"mode":0,"step":0,"grammar":"","grammar_json_functions":null,"backend":"","model_base_name":""}
api_1 | 2:01PM DBG Loading model: stablediffusion
api_1 | 2:01PM DBG Parameter Config: &{PredictionOptions:{Model:DreamShaper_8_pruned.safetensors Language: N:0 TopP:0xc00027af68 TopK:0xc00027af70 Temperature:0xc00027af78 Maxtokens:0xc00027af80 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc00027afa8 TypicalP:0xc00027afa0 Seed:0xc00027afc0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:stablediffusion F16:0xc00027aee5 Threads:0xc00027af58 Debug:0xc000636808 Roles:map[] Embeddings:false Backend:diffusers TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[donkey] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName: ParallelCalls:false} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc00027af98 MirostatTAU:0xc00027af90 Mirostat:0xc00027af88 NGPULayers:0xc00027afb0 MMap:0xc00027afb8 MMlock:0xc00027afb9 LowVRAM:0xc00027afb9 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc00027af50 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 MMProj: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:true PipelineType:StableDiffusionPipeline SchedulerType:k_dpmpp_2m EnableParameters:negative_prompt,num_inference_steps CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:25 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false DownloadFiles:[{Filename:DreamShaper_8_pruned.safetensors SHA256: 
URI:huggingface://Lykon/DreamShaper/DreamShaper_8_pruned.safetensors}] Description: Usage:curl http://localhost:8080/v1/images/generations
api_1 | -H "Content-Type: application/json"
api_1 | -d '{
api_1 | "prompt": "|",
api_1 | "step": 25,
api_1 | "size": "512x512"
api_1 | }'}
api_1 | 2:01PM INF Loading model 'DreamShaper_8_pruned.safetensors' with backend diffusers
api_1 | 2:01PM DBG Stopping all backends except 'DreamShaper_8_pruned.safetensors'
api_1 | 2:01PM DBG Model already loaded in memory: DreamShaper_8_pruned.safetensors
api_1 | [127.0.0.1]:59696 200 - GET /readyz
100%|██████████| 25/25 [00:41<00:00, 1.65s/it]0.0.1:35475): stderr
api_1 | 2:01PM DBG Response: {"created":1713535317,"id":"2d4c9092-ef63-4b09-acc7-bdbf365041df","data":[{"embedding":null,"index":0,"url":"http://localaihost:8080/generated-images/b641740348538.png"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
api_1 | [192.168.18.18]:54944 200 - POST /v1/images/generations
```

Expected Behavior

An image based on the text input. A direct API call to LocalAI returns an image, so the TextToImage function in the integration seems to be broken.

```
api_1 | 2:04PM DBG Request received: {"model":"","language":"","n":0,"top_p":null,"top_k":null,"temperature":null,"max_tokens":null,"echo":false,"batch":0,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"frequency_penalty":0,"presence_penalty":0,"tfz":null,"typical_p":null,"seed":null,"negative_prompt":"","rope_freq_base":0,"rope_freq_scale":0,"negative_prompt_scale":0,"use_fast_tokenizer":false,"clip_skip":0,"tokenizer":"","file":"","response_format":{},"size":"256x256","prompt":"A cute baby sea otter","instruction":"","input":null,"stop":null,"messages":null,"functions":null,"function_call":null,"stream":false,"mode":0,"step":0,"grammar":"","grammar_json_functions":null,"backend":"","model_base_name":""}
api_1 | 2:04PM DBG Loading model: stablediffusion
api_1 | 2:04PM DBG Parameter Config: &{PredictionOptions:{Model:DreamShaper_8_pruned.safetensors Language: N:0 TopP:0xc00027af68 TopK:0xc00027af70 Temperature:0xc00027af78 Maxtokens:0xc00027af80 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc00027afa8 TypicalP:0xc00027afa0 Seed:0xc00027afc0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:stablediffusion F16:0xc00027aee5 Threads:0xc00027af58 Debug:0xc000336208 Roles:map[] Embeddings:false Backend:diffusers TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[A cute baby sea otter] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName: ParallelCalls:false} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc00027af98 MirostatTAU:0xc00027af90 Mirostat:0xc00027af88 NGPULayers:0xc00027afb0 MMap:0xc00027afb8 MMlock:0xc00027afb9 LowVRAM:0xc00027afb9 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc00027af50 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 MMProj: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:true PipelineType:StableDiffusionPipeline SchedulerType:k_dpmpp_2m EnableParameters:negative_prompt,num_inference_steps CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:25 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false DownloadFiles:[{Filename:DreamShaper_8_pruned.safetensors SHA256: 
URI:huggingface://Lykon/DreamShaper/DreamShaper_8_pruned.safetensors}] Description: Usage:curl http://localhost:8080/v1/images/generations
api_1 | -H "Content-Type: application/json"
api_1 | -d '{
api_1 | "prompt": "|",
api_1 | "step": 25,
api_1 | "size": "512x512"
api_1 | }'}
api_1 | 2:04PM INF Loading model 'DreamShaper_8_pruned.safetensors' with backend diffusers
api_1 | 2:04PM DBG Stopping all backends except 'DreamShaper_8_pruned.safetensors'
api_1 | 2:04PM DBG Model already loaded in memory: DreamShaper_8_pruned.safetensors
100%|██████████| 25/25 [00:41<00:00, 1.65s/it]0.0.1:35475): stderr
api_1 | 2:05PM DBG Response: {"created":1713535525,"id":"c3bcdb0d-75fd-44f9-9275-8dec7f3eef1a","data":[{"embedding":null,"index":0,"url":"http://localaihost:8080/generated-images/b64269558929.png"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
api_1 | [192.168.18.226]:49364 200 - POST /v1/images/generations
```

To Reproduce

Install LocalAI with "aio" (or another image) and verify that LocalAI works on its own. Install integration_openai and Nextcloud Assistant, then try text-to-image.
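For the "verify LocalAI works" step, a direct call like the one in the Usage string from the logs can be used; host, port, and payload values here are assumptions based on this report, so adjust them to your setup:

```shell
# Hypothetical direct check against LocalAI, bypassing Nextcloud entirely.
BODY='{"prompt":"donkey","step":25,"size":"512x512"}'
# Sanity-check that the payload is valid JSON before sending it:
echo "$BODY" | python3 -m json.tool > /dev/null && echo "payload ok"
# Then POST it to the LocalAI images endpoint (uncomment and adjust host/port):
# curl http://localhost:8080/v1/images/generations \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

If the direct call returns a 200 with an image URL (as in the logs above) while Assistant still fails, the problem is in the integration rather than in LocalAI.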

@ewook ewook added the bug Something isn't working label Apr 19, 2024

ewook commented Apr 25, 2024

Perhaps it's related to mudler/LocalAI#1910?


khoschi commented May 13, 2024

I can confirm for the current LocalAI Docker container that "response_format": {"type": "url"} works, while

```
curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
"prompt": "A cute baby sea otter",
"model": "stablediffusion",
"n":1,
"response_format": "url",
"size": "256x256",
"user": "go-gpt-cli"
}'
```
fails on the Docker container. Since Nextcloud sends the request without the type object, image generation fails.
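To make the difference concrete, the two request shapes can be sketched like this (field names follow the OpenAI images API; which one works is as reported above, not verified here):

```python
import json

# Request body that reportedly works against the LocalAI container:
# response_format is an object with a "type" key.
working = {
    "prompt": "A cute baby sea otter",
    "model": "stablediffusion",
    "n": 1,
    "response_format": {"type": "url"},
    "size": "256x256",
}

# Plain-string form from the curl above (this is also what the
# OpenAI API spec uses), which reportedly fails against LocalAI:
failing = dict(working, response_format="url")

print(json.dumps(working["response_format"]))  # {"type": "url"}
print(json.dumps(failing["response_format"]))  # "url"
```

So the two payloads differ only in whether response_format is serialized as a JSON object or a bare string.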

Anyway, "Summarize", "Context" and "Reformulate" fail with localai as well, while direct access to the container works.


ewook commented May 13, 2024

Thank you for adding the Summarize, Context, and Reformulate parts as well. I had tried them before and they worked then, and on re-checking now, Context, Summarize, and Generate Headline still work for me (I have not tried Transcribe). Image generation still gives the same result. Since creating this issue I have upgraded Nextcloud to version 29; Assistant is at 1.0.9, the OpenAI/LocalAI integration is at 2.0.1, and the LocalAI container reports v2.14.0 (b58274b8a26a3d22605e3c484cf39c5dd9a5cf8e).
