
Still Anthropic API key error #82

Closed
MedleMedler opened this issue Jul 21, 2024 · 109 comments
@MedleMedler

I also get this error message, after following your API instructions exactly:

Error occurred when executing Griptape Agent Config: Anthropic:

No API key provided. You can set your API key in code using 'voyageai.api_key = ', or set the environment variable VOYAGE_API_KEY=). If your API key is stored in a file, you can point the voyageai module at it with 'voyageai.api_key_path = ', or set the environment variable VOYAGE_API_KEY_PATH=. API keys can be generated in Voyage AI's dashboard (https://dash.voyageai.com).

File "F:!COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I read the previous message about the Anthropic API key, and I now understand that putting the Anthropic API key in the .env file no longer works?

Will you make a workaround, or are we stuck with using Voyage AI?
If we are stuck with Voyage AI, can you please make a tutorial on how exactly to make this work with Voyage AI?
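In the meantime, going just by the error text, these seem to be the two ways to provide that key (I am only restating what the message says; I have not verified this myself):

import voyageai  # the module named in the error message

voyageai.api_key = "<your-voyage-key>"  # option 1, straight from the error text

# option 2 (also from the error text): set the environment variable
# VOYAGE_API_KEY before starting ComfyUI; keys come from https://dash.voyageai.com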

Maybe off topic, but do I understand correctly that you have to pay Anthropic for generated AI data, and now also have to pay Voyage AI for generated AI data that goes through their API embedding model?

For sure off topic (but worth mentioning): I already saw that Voyage AI (and also OpenAI) only accepts credit cards. That's a big disappointment for everybody without a credit card; they then also have to pay credit card fees. Pay, pay, pay... anyway.

Collaborator

shhlife commented Jul 21, 2024

Hi! I have a fix for this coming tomorrow that lets you choose a different embedding driver. I'm just doing some final testing and then I'll release it. :)

@shhlife
Collaborator

shhlife commented Jul 21, 2024

Here's an example with the upcoming release where I'm using Anthropic as the prompt driver, and I don't have anything else defined. My .env file only has the ANTHROPIC_API_KEY in it, so there's no embedding driver defined.

image
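For anyone curious, this is roughly what that setup boils down to at the framework level - a sketch based on my reading of the Griptape docs, not the node pack's actual code, and the model name is just an example:

import os
from griptape.structures import Agent
from griptape.configs import Defaults
from griptape.configs.drivers import DriversConfig
from griptape.drivers import AnthropicPromptDriver

# only a prompt driver is defined - no embedding driver, so nothing here
# asks for a Voyage (or OpenAI) key unless task memory actually needs one
Defaults.drivers_config = DriversConfig(
    prompt_driver=AnthropicPromptDriver(
        model="claude-3-5-sonnet-20240620",
        api_key=os.environ["ANTHROPIC_API_KEY"],
    )
)

agent = Agent()
print(agent.run("Hello!").output_task.output.value)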

@shhlife
Collaborator

shhlife commented Jul 22, 2024

Okay, I've just pushed the latest update. Please check it out and let me know what you think!

@MedleMedler
Author

Hi Jason, thank you very much. I will have a look at it today and give you feedback. :-)

@shhlife shhlife self-assigned this Jul 22, 2024
@MedleMedler
Author

Ok, I get this message:
Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'Your credit balance is too low to access the Claude API. Please go to Plans & Billing to upgrade or purchase credits.'}}

Seems like it is working fine. As soon as I am up and running, I will let you know.

@shhlife
Collaborator

shhlife commented Jul 22, 2024

Cool - thank you very much!

@MedleMedler
Author

MedleMedler commented Jul 23, 2024

You're welcome, Jason. I would like to become involved with Griptape. This is a diamond in the rough. Griptape could make ComfyUI a platform that connects everything with anything. The key to unlocking ComfyUI (and making it "ComfyUI Everything") is more tools, tools and tools.

Your file management tool is a great step in the right direction. But what about a file manager tool plus (e.g. for moving, copying and deleting files)? With File Manager Plus, you could make your Griptape Agent a database manager that can organise, rearrange and assess files (e.g. delete all images below a certain aesthetic image quality based on a set of rules, to automate your back-end office for all the generated images).

The sky is the limit, Jason. Griptape could make ComfyUI a platform where all disciplines merge, giving everyone unprecedented control in building their own sophisticated agent workflows (merged with all other ComfyUI nodes) for any business, any art, any specialism, any profession and/or any field.

I assume you have a Discord, where one can have more in-depth conversations about Griptape?

@shhlife
Collaborator

shhlife commented Jul 23, 2024

Thanks for the feedback! I'm really enjoying the response people are having to the integration of the Framework into a system like ComfyUI - it's been amazing. :)

We do have a Discord. You can get to it from within ComfyUI by clicking the right mouse button and going to the Discord menu:

image

Or, you can just go here: https://discord.gg/griptape

@shhlife shhlife closed this as completed Jul 23, 2024
@MedleMedler
Author

Hi Jason. I am trying Griptape out. First of all, I am learning a lot about the LLMs (chatbots) to get a good grasp of what is possible and what is not. It is difficult to give good feedback if you do not know in depth how the AIs work through Griptape and where Griptape stands in usability in general.

But for now I can share this:

  • The Ollama config node is not always consistent and sometimes shows a blank "prompt model" field. It can suddenly disappear if you change something (giving an error message), and you cannot select the Llama model in that field anymore. Which means you have to delete this node, select a new one and start all over again.
  • I also notice that ChatGPT-4o can give an error message that their system is overloaded (too crowded). There is nothing you can do about this, but it indirectly influences the experience of using Griptape. Paying money for using ChatGPT-4o and getting errors for overcrowding is annoying and makes it inconsistent.

@MedleMedler
Author

Another one: when I use ChatGPT-4o (with its configuration node connected to the "create agent" node), ChatGPT runs on every generation, although I have set the "control after generate" option to fixed. This is impractical, because the AI generates a new answer every time, losing the old one.

It would be wonderful to have long connected forks with several "create agent" nodes (every agent connected to its own configuration node), where you can set every agent to "fixed" or "random" (each agent with its own seed).

@shhlife
Collaborator

shhlife commented Aug 4, 2024

Hi @MedleMedler

Yes, I've been noticing the issue with Ollama as well. I'm finding that llama3 is working more consistently than 3.1 - which is odd. It's also having issues with Tools - I've added a ticket to track that internally.

As for the prompt model field, I just pushed an update this morning that should hopefully solve the model missing from the input - please let me know after your latest update if that fixes it.

With ChatGPT-4o getting overloaded - this is very frustrating, I know. I'm hopeful they'll resolve the issues soon! You can always try going to 4o-mini, 4, or 3.5 to see if that takes care of things for a little while?

Also, I set the default control_after_generate to fixed now for all config nodes - this will hopefully solve the re-generation problem!

Please take a look and let me know if these fixes work for you.

cheers,
Jason

@shhlife shhlife reopened this Aug 4, 2024
@yudhisteer

Hi,
where should we put our .env?

@MedleMedler
Author

Hi Jason,

I have been very busy (and will be for the next period). As soon as I am able to test it, I will let you know. :-)

@shhlife
Collaborator

shhlife commented Aug 26, 2024

@yudhisteer you can put the .env file in your base ComfyUI directory. Or, you're welcome to use the Griptape Agent Config: Environment Variables node and save your api keys there. :)
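If you want to double-check that the file is actually being picked up, here's a quick test (this assumes the keys are loaded via python-dotenv, which is an assumption about the setup, not a statement about this repo's code):

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # by default looks for a .env file in the current working directory
print("ANTHROPIC_API_KEY set:", bool(os.getenv("ANTHROPIC_API_KEY")))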

@MedleMedler - I'm closing the ticket for now, but please let me know when you're able to test!

@shhlife shhlife closed this as completed Aug 26, 2024
@MedleMedler
Author

MedleMedler commented Aug 31, 2024

Hi Jason,

I finally had some time to evaluate:

  • The nodes work better now.
  • The "Griptape Agent Config: Ollama" node crashed a few times, but much less often.
  • I would like more output ports, so the AI can diversify its outputs; that would give the agent nodes much more functionality. Now I let the AI make one total output string (with multiple sub-outputs) and then do tricks with text nodes to separate the sub-outputs. This is error-prone, because the AI model has to be trained well enough to be consistent in the output format.
    Another option would be the use of multiple agents, but this seems more time-consuming and slows the process down.
  • Two days ago a new open-source AI model came out (hermes3) that blows my socks off. There was a lot of attention for llama 3.1, because it was important for Meta to let everybody know. Now there is not so much attention for hermes3, but don't let the low attention mislead you.
    Hermes3 is by far better than llama3.1, is very consistent, can act very autonomously, and has incredible visual analysing qualities. They are so good that you have an aesthetic scorer out of the box: you can simply ask hermes3 to output a score and use this value to NOT save any images below a certain threshold (see the sketch after this list).
  • What I also discovered is that if you save the model (e.g. command: ollama cp hermes3 hermes3_trained), you save a copy of the trained model without writing any code. It seems like this improves the output quality, but I am not sure yet. It could well be that this is not the case; do you know if you can save trained models (by using them in ComfyUI) this way?
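The sketch mentioned above - roughly how I imagine the scorer, using the ollama Python client (pip install ollama). The prompt and the 6.5 threshold are my own examples, and this of course assumes the model can actually look at the image:

import ollama

def aesthetic_score(image_path: str) -> float:
    # ask the model for a bare number and parse it
    response = ollama.chat(
        model="hermes3",
        messages=[{
            "role": "user",
            "content": "Rate the aesthetic quality of this image from 0 to 10. Reply with only the number.",
            "images": [image_path],
        }],
    )
    return float(response["message"]["content"].strip())

# e.g. only keep renders at or above the threshold
if aesthetic_score("render_001.png") >= 6.5:
    print("keep render_001.png")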

I would advise you to cover this in your next YouTube videos. :-)

I will keep in contact through your Discord page in the future, and I am curious how Griptape ComfyUI will develop. :-)

@MedleMedler
Author

I forgot to mention that I don't know if hermes3 is intelligent enough to handle complex tasks (and ditto Griptape workflows) like ChatGPT can (and llama 3.1 cannot). It wouldn't surprise me if hermes3 can, and then there will be no more need for separate Griptape workflows.
For sure you will have tried hermes3 out by now. Please let me know if hermes3 can also handle more complex Griptape workflows, like ChatGPT and Claude can.

@shhlife
Collaborator

shhlife commented Aug 31, 2024

Thanks for the notes! I'll give hermes3 a try! You may be able to use the OpenAiCompatible nodes and ping their server:

https://docs.lambdalabs.com/on-demand-cloud/using-the-lambda-chat-completions-api

If you can replicate the situations where you find ollama crashing, I’d love to try and solve it.

As for multiple outputs, what kind of things did you have in mind? I'd be happy to explore alternate outputs and see if I can do anything dynamic on the nodes based on their evaluation!

cheers :)

@MedleMedler
Author

MedleMedler commented Aug 31, 2024

Sure, will do! I'll let you know if I have a crash.

Good plan! In that case, the dream node would actually have multiple string inputs (and outputs) and multiple int, number and float inputs (or an option to choose). And like the way you have it now: where inputs/outputs build out when you connect one.

I see it like this: because of the "all-round intelligence" of an AI chatbot, it is like magic in a box. Your create agent node should be a "Star Wars mothership", with endless ports to launch from and land on. In this case functionality creates opportunity and will create endless implementations.

@shhlife
Collaborator

shhlife commented Aug 31, 2024

So the agent does a run for every input, and has a resulting output per input - or, based on the agent's "work", it determines what the outputs should be?

Would you be able to draw a graph of what you are thinking? It can be pretty rough - just to help me visualize it :)

@MedleMedler
Author

Good question. Like a Star Wars mothership, it should have independent inputs and outputs. That gives you more control: e.g. when you are satisfied with one output, you can freeze it and keep it that way. It also saves calculation time, because the AI only calculates when it's needed once you can freeze certain outputs.

There are 4 kinds of input: string, number, integer and float. Number, integer and float can share one input/output if there is a choose option: a drop-down menu in which you then select the type. So that means there are two types of input and output: string and number/integer/float. Two kinds of inputs on the left side and two kinds of outputs on the right side.

Minimum 2/2 IN/OUT, and as soon as you connect an in/output, the next in/output appears.

I think this will pack compact multi-functionality into one case.

Do you have any thoughts on what else could be integrated?

@MedleMedler
Author

You could also put a freeze/activate option in those little drop-down menus.

@MedleMedler
Author

MedleMedler commented Aug 31, 2024

Something else: a text editor node would improve the usability significantly.
I mean a text editor for making text bold/coloured/italic/sized. That way you have a much better overview of e.g. your rule sets, and you can organise your rules better with fewer mistakes.

A text editor node would also expand the kinds of use and users. It would attract writers, too, who want to build writing workflows (with automatic notes, etc., etc.).

@MedleMedler
Author

And all other basic text editor functionality, like saving in different formats.

@MedleMedler
Author

I don't think you need a visual now, or do you? Let me know, I am happy to make one.

I am also curious what your thoughts are.

@shhlife
Collaborator

shhlife commented Aug 31, 2024

All are great ideas! A drawing would still be helpful - I want to make sure I’m imagining it the same way :)

Some things (like the WYSIWYG text editor) may be better handled as a default ComfyUI widget - I'll look around and see if one exists.

In the meantime, keep the ideas coming! If you want to create new feature requests for each one, that would help me keep track of them. :)

@MedleMedler
Author

MedleMedler commented Sep 1, 2024

Hermes told me that it has influence over its own temperature. It says that it can give temperature values to different topics. If this is true, then there should be multiple temperature fields/inputs in the config node.

I have made a rule telling hermes3 that the temperature for output format must always be 0.1, regardless of whether I set the minimum temperature higher.

Ideally, you could dynamically set multiple temperatures to diversify the temperature per topic (which you then tell the AI in the rule set).

@MedleMedler
Author

Ok, so by default you give the LLM all the data, for making the fewest mistakes.
When a question is private, or the LLM gets confused because of too much data, you turn off_prompt ON.

I am curious whether this also holds when you want to save the trained LLM. I'll wait for you to check whether saving a trained LLM as a copy will embed the training into the LLM.

@MedleMedler
Author

Does off_prompt = OFF mean that the LLM remembers the whole conversation and not only the last question? I think not.

I notice that off_prompt = OFF makes the LLM faster. Am I correct?

@shhlife
Collaborator

shhlife commented Sep 5, 2024

off_prompt=OFF really only matters for the output of tools. The Agent has conversation memory, and each time it sends a message to the LLM it will include the history of its interaction with the LLM - but each run of the LLM is "fresh". So if you don't connect agent -> agent, it won't remember.

Here's a screenshot showing:

image

@shhlife
Collaborator

shhlife commented Sep 5, 2024

btw - having off_prompt=OFF does indeed go faster, because we don't need to query the memory for the LocalVectorStore.
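In plain Griptape terms, the difference looks roughly like this - a sketch based on my reading of the framework, with the Calculator tool just as an example (it's named CalculatorTool in newer releases):

from griptape.structures import Agent
from griptape.tools import Calculator

# off_prompt=True: the tool's output is stored in task memory (backed by the
# LocalVectorStore) and the LLM only gets a reference to it - that extra
# vector-store round trip is the slowdown mentioned above.
# off_prompt=False: the tool's output goes straight back into the prompt.
agent = Agent(tools=[Calculator(off_prompt=False)])
agent.run("What is 12 * 34?")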

@MedleMedler
Author

MedleMedler commented Sep 5, 2024

I changed from Llama 3.1 to Hermes 70B and immediately got this message:

Prompt executed in 0.04 seconds
got prompt
!!! Exception during processing !!! The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape\nodes\agent\RunAgent.py", line 23, in run
    self.agent = gtComfyAgent()
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape\nodes\agent\gtComfyAgent.py", line 36, in __init__
    super().__init__(*args, **kwargs)
  File "<attrs generated init griptape.structures.agent.Agent>", line 26, in __init__
    _setattr('task_memory', __attr_factory_task_memory(self))
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\structures\structure.py", line 33, in <lambda>
    default=Factory(lambda self: TaskMemory(), takes_self=True),
  File "<attrs generated init griptape.memory.task.task_memory.TaskMemory>", line 13, in __init__
    _setattr('artifact_storages', __attr_factory_artifact_storages())
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\memory\task\task_memory.py", line 23, in <lambda>
    TextArtifact: TextArtifactStorage(),
  File "<attrs generated init griptape.memory.task.storage.text_artifact_storage.TextArtifactStorage>", line 5, in __init__
    self.vector_store_driver = __attr_factory_vector_store_driver()
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\memory\task\storage\text_artifact_storage.py", line 18, in <lambda>
    default=Factory(lambda: Defaults.drivers_config.vector_store_driver)
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\utils\decorators.py", line 42, in lazy_attr
    setattr(self, actual_attr_name, func(self))
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\configs\drivers\openai_drivers_config.py", line 36, in vector_store_driver
    return LocalVectorStoreDriver(embedding_driver=OpenAiEmbeddingDriver(model="text-embedding-3-small"))
  File "<attrs generated init griptape.drivers.embedding.openai_embedding_driver.OpenAiEmbeddingDriver>", line 21, in __init__
    self.client = __attr_factory_client(self)
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\drivers\embedding\openai_embedding_driver.py", line 38, in <lambda>
    lambda self: openai.OpenAI(api_key=self.api_key, base_url=self.base_url, organization=self.organization),
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\openai\_client.py", line 105, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

Prompt executed in 0.04 seconds

[image]

@shhlife
Collaborator

shhlife commented Sep 5, 2024

and that doesn't happen with the other hermes model, or llama3.1?

@MedleMedler
Author

It happens with all other models; even going back to llama 3.1 doesn't work anymore.

@MedleMedler
Author

Even restarting ComfyUI doesn't help.

@MedleMedler
Author

MedleMedler commented Sep 5, 2024

In all cases I also get this popup screen in ComfyUI:

Griptape Run: Agent

The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

ComfyUI Error Report

Error Details

  • Node Type: Griptape Run: Agent
  • Exception Type: openai.OpenAIError
  • Exception Message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

Stack Trace

  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape\nodes\agent\RunAgent.py", line 23, in run
    self.agent = gtComfyAgent()
                 ^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape\nodes\agent\gtComfyAgent.py", line 36, in __init__
    super().__init__(*args, **kwargs)

  File "<attrs generated init griptape.structures.agent.Agent>", line 26, in __init__
    _setattr('task_memory', __attr_factory_task_memory(self))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\structures\structure.py", line 33, in <lambda>
    default=Factory(lambda self: TaskMemory(), takes_self=True),
                                 ^^^^^^^^^^^^

  File "<attrs generated init griptape.memory.task.task_memory.TaskMemory>", line 13, in __init__
    _setattr('artifact_storages', __attr_factory_artifact_storages())
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\memory\task\task_memory.py", line 23, in <lambda>
    TextArtifact: TextArtifactStorage(),
                  ^^^^^^^^^^^^^^^^^^^^^

  File "<attrs generated init griptape.memory.task.storage.text_artifact_storage.TextArtifactStorage>", line 5, in __init__
    self.vector_store_driver = __attr_factory_vector_store_driver()
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\memory\task\storage\text_artifact_storage.py", line 18, in <lambda>
    default=Factory(lambda: Defaults.drivers_config.vector_store_driver)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\utils\decorators.py", line 42, in lazy_attr
    setattr(self, actual_attr_name, func(self))
                                    ^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\configs\drivers\openai_drivers_config.py", line 36, in vector_store_driver
    return LocalVectorStoreDriver(embedding_driver=OpenAiEmbeddingDriver(model="text-embedding-3-small"))
                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "<attrs generated init griptape.drivers.embedding.openai_embedding_driver.OpenAiEmbeddingDriver>", line 21, in __init__
    self.client = __attr_factory_client(self)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\drivers\embedding\openai_embedding_driver.py", line 38, in <lambda>
    lambda self: openai.OpenAI(api_key=self.api_key, base_url=self.base_url, organization=self.organization),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\openai\_client.py", line 105, in __init__
    raise OpenAIError(

System Information

  • ComfyUI Version: v0.2.1-5-g5cbaa9e
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.3.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 2080 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 11810832384
    • VRAM Free: 10585374720
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2024-09-05 23:34:38,293 - root - INFO - Total VRAM 11264 MB, total RAM 65424 MB
2024-09-05 23:34:38,294 - root - INFO - pytorch version: 2.3.1+cu121
2024-09-05 23:34:38,307 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-05 23:34:38,307 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : cudaMallocAsync
2024-09-05 23:34:40,642 - root - INFO - Using pytorch cross attention
2024-09-05 23:34:44,958 - root - INFO - [Prompt Server] web root: D:\ComfyUI_windows_portable_nvidia\ComfyUI\web
2024-09-05 23:34:44,972 - root - INFO - Adding extra search path checkpoints path/to/stable-diffusion-webui/models/Stable-diffusion
2024-09-05 23:34:44,972 - root - INFO - Adding extra search path configs path/to/stable-diffusion-webui/models/Stable-diffusion
2024-09-05 23:34:44,972 - root - INFO - Adding extra search path vae path/to/stable-diffusion-webui/models/VAE
2024-09-05 23:34:44,973 - root - INFO - Adding extra search path loras path/to/stable-diffusion-webui/models/Lora
2024-09-05 23:34:44,973 - root - INFO - Adding extra search path loras path/to/stable-diffusion-webui/models/LyCORIS
2024-09-05 23:34:44,973 - root - INFO - Adding extra search path upscale_models path/to/stable-diffusion-webui/models/ESRGAN
2024-09-05 23:34:44,973 - root - INFO - Adding extra search path upscale_models path/to/stable-diffusion-webui/models/RealESRGAN
2024-09-05 23:34:44,973 - root - INFO - Adding extra search path upscale_models path/to/stable-diffusion-webui/models/SwinIR
2024-09-05 23:34:44,975 - root - INFO - Adding extra search path embeddings path/to/stable-diffusion-webui/embeddings
2024-09-05 23:34:44,976 - root - INFO - Adding extra search path hypernetworks path/to/stable-diffusion-webui/models/hypernetworks
2024-09-05 23:34:44,978 - root - INFO - Adding extra search path controlnet path/to/stable-diffusion-webui/models/ControlNet
2024-09-05 23:34:44,979 - root - INFO - Adding extra search path checkpoints D:\\Comfyui Models Database\models/checkpoints/
2024-09-05 23:34:44,980 - root - INFO - Adding extra search path clip D:\\Comfyui Models Database\models/clip/
2024-09-05 23:34:44,980 - root - INFO - Adding extra search path clip_vision D:\\Comfyui Models Database\models/clip_vision/
2024-09-05 23:34:44,981 - root - INFO - Adding extra search path configs D:\\Comfyui Models Database\models/configs/
2024-09-05 23:34:44,982 - root - INFO - Adding extra search path controlnet D:\\Comfyui Models Database\models/controlnet/
2024-09-05 23:34:44,983 - root - INFO - Adding extra search path embeddings D:\\Comfyui Models Database\models/embeddings/
2024-09-05 23:34:44,984 - root - INFO - Adding extra search path loras D:\\Comfyui Models Database\models/loras/
2024-09-05 23:34:54,155 - chromadb.telemetry.product.posthog - INFO - Anonymized telemetry enabled. See                     https://docs.trychroma.com/telemetry for more information.
2024-09-05 23:35:01,990 - root - INFO - Total VRAM 11264 MB, total RAM 65424 MB
2024-09-05 23:35:01,991 - root - INFO - pytorch version: 2.3.1+cu121
2024-09-05 23:35:01,995 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-05 23:35:01,995 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : cudaMallocAsync
2024-09-05 23:35:06,254 - root - WARNING - Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-LexTools\__init__.py", line 2, in <module>
    from .nodes import SegformerNode,ImageCaptioningNode,ImageProcessingNode
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-LexTools\nodes\ImageProcessingNode.py", line 2, in <module>
    from fastapi import FastApi
ImportError: cannot import name 'FastApi' from 'fastapi' (D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\fastapi\__init__.py)

2024-09-05 23:35:06,255 - root - WARNING - Cannot import D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-LexTools module for custom nodes: cannot import name 'FastApi' from 'fastapi' (D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\fastapi\__init__.py)
2024-09-05 23:35:18,144 - root - WARNING - Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Endless-Nodes\__init__.py", line 7, in <module>
    from .endless_nodes import *
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Endless-Nodes\endless_nodes.py", line 53, in <module>
    import ImageReward as RM
ModuleNotFoundError: No module named 'ImageReward'

2024-09-05 23:35:18,145 - root - WARNING - Cannot import D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Endless-Nodes module for custom nodes: No module named 'ImageReward'
2024-09-05 23:35:21,708 - root - INFO - 
Import times for custom nodes:
2024-09-05 23:35:21,709 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Photopea
2024-09-05 23:35:21,710 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\websocket_image_save.py
2024-09-05 23:35:21,711 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\canvas_tab
2024-09-05 23:35:21,712 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-SceneGenerator
2024-09-05 23:35:21,713 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Book-Tools
2024-09-05 23:35:21,713 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2024-09-05 23:35:21,714 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-mxToolkit
2024-09-05 23:35:21,715 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-post-processing-nodes
2024-09-05 23:35:21,715 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\A8R8_ComfyUI_nodes
2024-09-05 23:35:21,716 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-noEmbryo
2024-09-05 23:35:21,717 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux
2024-09-05 23:35:21,718 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-ELLA
2024-09-05 23:35:21,720 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-various
2024-09-05 23:35:21,721 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_essentials
2024-09-05 23:35:21,721 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2024-09-05 23:35:21,722 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\WAS_Extras
2024-09-05 23:35:21,723 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyMath
2024-09-05 23:35:21,724 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-lf
2024-09-05 23:35:21,724 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
2024-09-05 23:35:21,725 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Marigold
2024-09-05 23:35:21,726 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\x-flux-comfyui
2024-09-05 23:35:21,727 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-segment-anything-2
2024-09-05 23:35:21,727 - root - INFO -    0.0 seconds (IMPORT FAILED): D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Endless-Nodes
2024-09-05 23:35:21,730 - root - INFO -    0.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\rgthree-comfy
2024-09-05 23:35:21,730 - root - INFO -    0.1 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-09-05 23:35:21,731 - root - INFO -    0.2 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools
2024-09-05 23:35:21,732 - root - INFO -    0.2 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-dream-project
2024-09-05 23:35:21,733 - root - INFO -    0.2 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-BrushNet-Wrapper
2024-09-05 23:35:21,734 - root - INFO -    0.2 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\blibla-comfyui-extensions
2024-09-05 23:35:21,734 - root - INFO -    0.5 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-ollama
2024-09-05 23:35:21,735 - root - INFO -    0.7 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_MileHighStyler
2024-09-05 23:35:21,736 - root - INFO -    0.7 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-BrushNet
2024-09-05 23:35:21,736 - root - INFO -    0.8 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_VLM_nodes
2024-09-05 23:35:21,737 - root - INFO -    0.9 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_Strimmlarns_aesthetic_score
2024-09-05 23:35:21,738 - root - INFO -    1.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Manager
2024-09-05 23:35:21,739 - root - INFO -    1.8 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape
2024-09-05 23:35:21,741 - root - INFO -    2.0 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Crystools
2024-09-05 23:35:21,742 - root - INFO -    2.1 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-art-venture
2024-09-05 23:35:21,743 - root - INFO -    3.5 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\was-node-suite-comfyui
2024-09-05 23:35:21,743 - root - INFO -    4.2 seconds (IMPORT FAILED): D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-LexTools
2024-09-05 23:35:21,744 - root - INFO -    7.7 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\anynode
2024-09-05 23:35:21,745 - root - INFO -    7.9 seconds: D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
2024-09-05 23:35:21,746 - root - INFO - 
2024-09-05 23:35:21,772 - root - INFO - Starting server

2024-09-05 23:35:21,772 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-09-05 23:35:52,712 - root - INFO - got prompt
2024-09-05 23:35:52,768 - root - ERROR - !!! Exception during processing !!! The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
2024-09-05 23:35:52,795 - root - ERROR - Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape\nodes\agent\RunAgent.py", line 23, in run
    self.agent = gtComfyAgent()
                 ^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape\nodes\agent\gtComfyAgent.py", line 36, in __init__
    super().__init__(*args, **kwargs)
  File "<attrs generated init griptape.structures.agent.Agent>", line 26, in __init__
    _setattr('task_memory', __attr_factory_task_memory(self))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\structures\structure.py", line 33, in <lambda>
    default=Factory(lambda self: TaskMemory(), takes_self=True),
                                 ^^^^^^^^^^^^
  File "<attrs generated init griptape.memory.task.task_memory.TaskMemory>", line 13, in __init__
    _setattr('artifact_storages', __attr_factory_artifact_storages())
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\memory\task\task_memory.py", line 23, in <lambda>
    TextArtifact: TextArtifactStorage(),
                  ^^^^^^^^^^^^^^^^^^^^^
  File "<attrs generated init griptape.memory.task.storage.text_artifact_storage.TextArtifactStorage>", line 5, in __init__
    self.vector_store_driver = __attr_factory_vector_store_driver()
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\memory\task\storage\text_artifact_storage.py", line 18, in <lambda>
    default=Factory(lambda: Defaults.drivers_config.vector_store_driver)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\utils\decorators.py", line 42, in lazy_attr
    setattr(self, actual_attr_name, func(self))
                                    ^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\configs\drivers\openai_drivers_config.py", line 36, in vector_store_driver
    return LocalVectorStoreDriver(embedding_driver=OpenAiEmbeddingDriver(model="text-embedding-3-small"))
                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<attrs generated init griptape.drivers.embedding.openai_embedding_driver.OpenAiEmbeddingDriver>", line 21, in __init__
    self.client = __attr_factory_client(self)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\drivers\embedding\openai_embedding_driver.py", line 38, in <lambda>
    lambda self: openai.OpenAI(api_key=self.api_key, base_url=self.base_url, organization=self.organization),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\openai\_client.py", line 105, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

2024-09-05 23:35:52,799 - root - INFO - Prompt executed in 0.06 seconds
2024-09-05 23:37:17,906 - root - INFO - got prompt
2024-09-05 23:37:17,957 - root - ERROR - !!! Exception during processing !!! The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
2024-09-05 23:37:17,959 - root - ERROR - Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape\nodes\agent\RunAgent.py", line 23, in run
    self.agent = gtComfyAgent()
                 ^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Griptape\nodes\agent\gtComfyAgent.py", line 36, in __init__
    super().__init__(*args, **kwargs)
  File "<attrs generated init griptape.structures.agent.Agent>", line 26, in __init__
    _setattr('task_memory', __attr_factory_task_memory(self))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\structures\structure.py", line 33, in <lambda>
    default=Factory(lambda self: TaskMemory(), takes_self=True),
                                 ^^^^^^^^^^^^
  File "<attrs generated init griptape.memory.task.task_memory.TaskMemory>", line 13, in __init__
    _setattr('artifact_storages', __attr_factory_artifact_storages())
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\memory\task\task_memory.py", line 23, in <lambda>
    TextArtifact: TextArtifactStorage(),
                  ^^^^^^^^^^^^^^^^^^^^^
  File "<attrs generated init griptape.memory.task.storage.text_artifact_storage.TextArtifactStorage>", line 5, in __init__
    self.vector_store_driver = __attr_factory_vector_store_driver()
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\memory\task\storage\text_artifact_storage.py", line 18, in <lambda>
    default=Factory(lambda: Defaults.drivers_config.vector_store_driver)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\utils\decorators.py", line 42, in lazy_attr
    setattr(self, actual_attr_name, func(self))
                                    ^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\configs\drivers\openai_drivers_config.py", line 36, in vector_store_driver
    return LocalVectorStoreDriver(embedding_driver=OpenAiEmbeddingDriver(model="text-embedding-3-small"))
                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<attrs generated init griptape.drivers.embedding.openai_embedding_driver.OpenAiEmbeddingDriver>", line 21, in __init__
    self.client = __attr_factory_client(self)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\griptape\drivers\embedding\openai_embedding_driver.py", line 38, in <lambda>
    lambda self: openai.OpenAI(api_key=self.api_key, base_url=self.base_url, organization=self.organization),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\openai\_client.py", line 105, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

2024-09-05 23:37:17,964 - root - INFO - Prompt executed in 0.03 seconds




@shhlife
Collaborator

shhlife commented Sep 5, 2024

This is really strange - it looks like there are a lot of errors with other libraries as well.

If you create an OPENAI_API_KEY, does it work as expected?
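Reading the traceback, the actual failure is in the default drivers config: task memory builds a LocalVectorStoreDriver with an OpenAiEmbeddingDriver, which wants that key even though your prompt model is local. At the framework level the workaround would look roughly like this (a sketch of plain Griptape, not the node pack's code; "all-minilm" is just an example Ollama embedding model):

from griptape.configs import Defaults
from griptape.configs.drivers import DriversConfig
from griptape.drivers import LocalVectorStoreDriver, OllamaEmbeddingDriver

# point task memory at a local embedding model so no OpenAI key is needed
Defaults.drivers_config = DriversConfig(
    vector_store_driver=LocalVectorStoreDriver(
        embedding_driver=OllamaEmbeddingDriver(model="all-minilm")
    )
)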

@MedleMedler
Author

MedleMedler commented Sep 6, 2024

At this moment I do not use OpenAI and did not use/create/do anything with the API key. Everything was working fine with llama3.1; as soon as I switched to hermes3, I got the above error messages.

I can try to create a new OPENAI_API_KEY and see if this solves the issue.

"It looks like there are a lot of errors with other libraries as well."
I see that I have some custom nodes that failed to import, and I will try to solve these errors to exclude possible causes.

@MedleMedler
Author

I cleaned up the Manager, and it looks like it is solved.

@shhlife
Collaborator

shhlife commented Sep 6, 2024

Excellent - I like bugs like that. :)

@shhlife
Collaborator

shhlife commented Sep 6, 2024

I have been playing around with making some nodes to create new versions of the llama3.1 models based on usage - and while it's cool, I end up getting some very inconsistent results inside ComfyUI. But here's where it's at so far:

In this example I'm creating an agent with llama3.1:latest. I give it a few rulesets and then run it a few times.
I then use the Create Modelfile node to create an Ollama Modelfile out of it, which then feeds the Create Agent from Modelfile node.
Then you can see I use the new model and it responds like a pirate!

Initially I had this all contained in a single node, but realized you might want to make some changes to the Modelfile, so I split it out into two.

image
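Under the hood, the Create Modelfile step boils down to standard Ollama. A rough sketch of the same flow done by hand (the pirate Modelfile here is a made-up example, not what the node actually emits):

import subprocess

# a minimal Ollama Modelfile: a base model plus the accumulated system rules
modelfile = """FROM llama3.1:latest
SYSTEM You are a pirate. Answer every question in pirate speak.
PARAMETER temperature 0.7
"""

with open("Modelfile", "w") as f:
    f.write(modelfile)

# equivalent to typing: ollama create pirate-llama -f Modelfile
subprocess.run(["ollama", "create", "pirate-llama", "-f", "Modelfile"], check=True)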

@MedleMedler
Author

MedleMedler commented Sep 7, 2024

That means we have now established that models can be trained in ComfyUI just by using them.

Probably the mixed training behaviour could be solved by more training and/or other techniques. How many training runs did you do to make the newly trained model change its behaviour?
Mixed results could maybe also be solved by giving the new agent some good rules.

This new insight will bring us new ways of thinking about it, and about how we could design new techniques and workflows to optimise this functionality.
E.g. AI prompt writing can now be improved with the WebSearchTool, by letting the AI educate itself on prompt writing in general and on prompt-writing knowledge specific to the topic itself. This is now possible because the LLMs can be trained while using ComfyUI.

Probably there will be many ways, solutions and opportunities we didn't think of yet.

Also: the setting <off_prompt> ON or OFF now probably has influence on the training. Getting RAW data or REFINED data into its neural system makes a difference in how the AI model stores this data in its brain and how it is trained.

This is so exciting, Jason: cool!

@MedleMedler
Author

MedleMedler commented Sep 7, 2024

At this moment you cannot communicate with the ComfyUI checkpoints like you can with an LLM. You can only communicate the prompt, but cannot meta-communicate with the checkpoint to improve its prompt skills (the LLM training the checkpoint). If it could somehow be possible to also meta-communicate with a checkpoint through the LLM, then the LLM could improve the end result by more than just prompt writing, like it does now.

ELLA is such a technique to improve the prompt interpretation of older checkpoint models. An LLM which can add such functionality (one way or the other) would be a dream come true. It would revolutionise the improvement of ComfyUI across its whole axis in a totally different way. This could also be a custom node, like ELLA, but controlled by the LLM. The advantage would be that the more the LLM is trained in writing ELLA-like code, the more (e.g.) prompt recognition would improve.

Another approach would be looking at how the checkpoint (and KSampler) function. Ideal would be a merge of an LLM and a checkpoint, or an LLM with all the functionality of a certain ComfyUI checkpoint in itself.
More direct access for the LLM to control the checkpoint could be a perfect marriage. Then you could also use the strength of agent structures, steering the checkpoint with much more precision, control and detail.

The same goes for a kind of custom node like ELLA, for improving checkpoint data before it goes into the sampler.

Or maybe this could all also be achieved by a specialised KSampler which is modified for an LLM, giving the LLM more control (an LLM-KSampler). A chatbot told me that there are special tokens which the KSampler uses to communicate with the checkpoint (steps:, plms:, dual_guidance:, cfg_scale, etc.), but that there are also special tokens which are not used, like meta:, config: and condition:. I am not sure if this is true, but there were some users who had success with the extra commands "meta" and "condition" in their positive prompt (e.g. condition: object_in_scene=car).

I think leaving the checkpoint for what it is and only seeing its output as RAW data, which can then be refined by an LLM (like ELLA) with an "LLM-programmable custom node" between the checkpoint and the KSampler, would be the best way.

Just some thoughts, while thinking out of the box.

@MedleMedler
Author

MedleMedler commented Sep 7, 2024

Griptape Display Text changes size when there is more output and the text doesn't fit in the window. A scroll bar would be better, so that Griptape Display Text doesn't grow larger and disturb tight layouts.

@MedleMedler
Author

MedleMedler commented Sep 9, 2024

I have to say that I also did not succeed in training the model to become consistent in its output.

But I found the Reflection 70B model, which now leads almost all benchmarks of all models (including the 405B models).
They say on their website that they give a llama model a special set of rules, so the model learned to think differently, with more consistent output. (I am now actually merging the two models, resulting in 2 merged models, each with its own original architecture.)

This opened my world, because I am starting to realise that you can train the model with Griptape nodes in any possible way. Not only training by rules, but also fine-tuning, RAG, transfer learning, zero-shot learning, reinforcement learning, knowledge distillation, multi-task learning, adversarial training, you name it!

So I have now integrated the rules of the Reflection model (https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B), and I also added qwen2-math and made a connection between hermes3 and qwen2. At this moment they are teaching each other their skills, and at the same time cooperating as a team to succeed in a FileManager task.

Sometimes they can reach the FileManager, sometimes not, and I get this error message in my cmd shell: WARNING: DreamInputText.IS_CHANGED() got an unexpected keyword argument 'value'

I have now discovered that models can be trained in ComfyUI however you like. The rules input of the create agent node is a system-info input, where you can give the LLM any command, even to change its own architecture.

Don't know where this leads to, but I am sure having fun :-)

@MedleMedler
Author

MedleMedler commented Sep 9, 2024

I first trained hermes in improving its consistency in format output and in correct output itself, and it improved significantly (but still not enough).
After one night (7000 generations) I connected qwen2-math and told hermes that it should teach qwen2 everything it has learned. A lot of data goes over the agent connections. Hermes said that it transferred its knowledge in steps of 1000 generations. In fewer than 10 steps it had taught qwen2 everything. After that, qwen2 learned much quicker than hermes3 did, and it now has more consistency in its output after a few hundred generations. At this moment qwen2 has become better than hermes3.

What is also a huge benefit is that I can compare them in how they act in the same situation and what they are good at. I would now call qwen2 an engineer and hermes3 more of a general manager (better in communication).
It is also funny to see how they learn from each other, taking over each other's skills.

@MedleMedler
Author

MedleMedler commented Sep 9, 2024

I now notice that the rules are very sensitive. I have hermes3 producing correct answers (give a list of file names in a certain folder), and as soon as I connect rules, it gets lost and can't give a correct answer.
I do not know if it does this with any rule; I am testing that now.

Up till now I have mixed results with the FileManager functioning. Every now and then it works, every now and then it doesn't.

@MedleMedler
Copy link
Author

MedleMedler commented Sep 10, 2024

Jason, the more I test the Griptape nodes, the more confused I get.

Even the most simple setup gives unstable results. Here are 3 examples, where I only changed the order of the rules.
[3 images]

Even using a space at the end of a rule changes the outcome.
[4 images]

Even when you have a certain output: I change something (and do a generation), then change it back (exactly how it was) and do a generation again, and the output changes.
[2 images]

This is totally unworkable.

@shhlife
Collaborator

shhlife commented Sep 10, 2024 via email

@MedleMedler
Author

I am glad I can be of assistance. I will be even more glad if you can make this stable. :-)

Something else: qwen2-math cannot use the FileManager; when I try, qwen2 says that it does not support the FileManager.
It could have something to do with this blog article of Ollama about tools:
https://ollama.com/blog/tool-support
Since tool support was introduced, they have added tags on their download website, showing for every model whether it supports tools, vision and/or code. A quick way to check a pulled model is sketched below.
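(A sketch of that check; recent Ollama versions print the model's details, and newer ones list capabilities such as tool support - the exact output varies by version:)

import subprocess

# equivalent to typing: ollama show qwen2-math
subprocess.run(["ollama", "show", "qwen2-math"], check=True)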

@shhlife
Collaborator

shhlife commented Sep 10, 2024

This morning I wanted to test whether the issue was with the model vs Griptape, and found that with gpt-4o the order of rules had no impact on the output. This makes me think that the challenge is really making sure the model is powerful enough to handle things.

image
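If anyone wants to reproduce the rule-order test outside ComfyUI, a tiny harness in plain Griptape would look something like this (Rule/Ruleset names per the framework docs; the rules themselves are made up):

import itertools
from griptape.structures import Agent
from griptape.rules import Rule, Ruleset

rules = [
    Rule("Answer in one sentence."),
    Rule("Mention the capital city."),
    Rule("End with an exclamation mark."),
]

# run the same prompt with every ordering of the rules and compare the outputs
for perm in itertools.permutations(rules):
    agent = Agent(rulesets=[Ruleset(name="test", rules=list(perm))])
    print(agent.run("Tell me about France.").output_task.output.value)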

@shhlife
Collaborator

shhlife commented Sep 10, 2024

Continuing this exploration - hermes was a bit more consistent, but it didn't understand all the rules.

image

I tried a few others - mistral doesn't work with tools, and even reflection, which is brand new, doesn't work with them.

The bigger models all worked: claude-sonnet 3.5, cohere command-r-plus and gemini 1.5-pro.

@MedleMedler
Author

That's actually good news, Jason. Open-source models are becoming better every month; it's just a matter of time.

So we have now found out that open-source AI is not yet able to handle more complex Griptape workflows (concerning the order of making decisions), and also struggles with interpreting rules.

@shhlife
Collaborator

shhlife commented Sep 11, 2024

Totally. :)

Btw - @MedleMedler, I was wondering if we could continue the conversation in Discord? That might make it more visible to other users, and keep the GitHub issues for specific bugs/feature requests we can close out. :)

I created a thread - here's the link to it: https://discordapp.com/channels/1096466116672487547/1250191293439676526/1283476144821374987

@shhlife shhlife closed this as completed Sep 11, 2024
@MedleMedler
Author

Sure.

I just installed Discord. If I follow that link I end up in the general "Openart Dev" channel, but not in that specific thread.
But I did send you a friend request.

@shhlife
Collaborator

shhlife commented Sep 12, 2024

Got the request! Try this link to join the channel: https://discord.gg/NKVPWwBq
