Still Anthropic API key error #82
Hi! I have a fix for this coming tomorrow that lets you choose a different embedding driver, just doing some final testing and then I'll release it. :)
Okay, I've just pushed the latest update. Please check it out and let me know what you think!
Hi Jason, thank you very much. I will have a look at it today and give you feedback. :-)
Ok, I get this message: Seems like it is working fine. As soon as I am up and running, I will let you know.
Cool - thank you very much!
You're welcome, Jason. I would like to become involved with Griptape. This is a diamond in the rough. Griptape could make ComfyUI a platform that connects everything with anything. The key to unlocking ComfyUI (and making it "ComfyUI Everything") is more tools, tools and tools. Your file management tool is a great step in the right direction. But what about a file manager tool plus, e.g. for moving, copying and deleting files? With File Manager Plus, you could make your Griptape Agent a database manager that organises, rearranges and assesses files (e.g. deleting all images below a certain aesthetic image quality based on a set of rules, to automate your back-end office for all the generated images). The sky is the limit, Jason. Griptape could make ComfyUI a platform where all disciplines merge together, giving everyone unprecedented control in building their own sophisticated agent workflows (merged with all other ComfyUI nodes) for any business, any art, any specialism, any profession and/or any field. I assume you have a Discord, where one can have more in-depth conversations about Griptape?
Thanks for the feedback! I'm really enjoying the response people are having to the integration of the Framework into a system like ComfyUI - it's been amazing. :) We do have a Discord; you can get to it from ComfyUI by right-clicking and going to the Discord menu. Or, you can just go here: https://discord.gg/griptape
Hi Jason. I am trying Griptape out. First of all, I am learning a lot about LLMs (chatbots) to get a good grasp of what is possible and what is not. It is difficult to give good feedback if you do not know in depth how AIs work through Griptape and where Griptape stands in usability in general. But for now I can share this:
Another one: when I use ChatGPT 4o (with its configuration model connected to the "create agent module"), ChatGPT runs every generation, although I have set the "control after generate" option to fixed. This is impractical because the AI generates a new answer every time, losing the old one. It would be wonderful if you could have long connected forks with several "create agent modules" (every agent connected to its own configuration model), where you can set every agent to "fixed" or "random" (each agent with its own seed).
Hi @MedleMedler Yes, I've been noticing the issue with Ollama as well. I'm finding that llama3 is working more consistently than 3.1 - which is odd. It's also having issues with Tools - I've added a ticket to track that internally. As for the prompt model field, I just pushed an update this morning that should hopefully solve the model missing from the input - please let me know after your latest update if that fixes it. With ChatGPT-4 getting overloaded - this is very frustrating, I know. I'm hopeful they'll resolve the issues soon! You can always try going to 4o-mini, 4, or 3.5 to see if that takes care of things for a little while. Also, I set the default control_after_generate to fixed now for all config nodes - this will hopefully solve the re-generation problem! Please take a look and let me know if these fixes work for you. Cheers,
Hi,
Hi Jason, I have been very busy (and will be for the next period). As soon as I am able to test it, I will let you know. :-)
@yudhisteer you can put the .env file in your base ComfyUI directory. Or, you're welcome to use the
@MedleMedler - I'm closing the ticket for now, but please let me know when you're able to test!
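For reference, a minimal sketch of what that .env file might contain. The key names here are the ones mentioned elsewhere in this thread, and the values are placeholders - include only the providers you actually use:

```
# ComfyUI/.env - illustrative contents, not a complete list
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
VOYAGE_API_KEY=...
```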
Hi Jason, I finally had some time to evaluate:
I would like to advise you to take this with you in your next YouTube videos. :-) I will keep in touch using your Discord page in future and am curious how Griptape ComfyUI will develop. :-)
I forgot to mention that I don't know if hermes3 is intelligent enough to handle complex tasks (and ditto Griptape workflows) like ChatGPT can (and Llama 3.1 cannot). It wouldn't surprise me if hermes3 can, and then there will be no more need for separate Griptape workflows.
Thanks for the notes! I'll give hermes3 a try! You may be able to use the OpenAiCompatible nodes and ping their server: https://docs.lambdalabs.com/on-demand-cloud/using-the-lambda-chat-completions-api If you can replicate the situations where you find Ollama crashing, I'd love to try and solve it. As for multiple outputs, what kind of things did you have in mind? I'd be happy to explore alternate outputs and see if I can do anything dynamic on the nodes based on their evaluation! Cheers :)
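To make the "OpenAI-compatible" idea concrete, here is a hedged sketch of the request shape such endpoints accept. The base URL and model name are assumptions for illustration - check the Lambda docs linked above for the real values:

```python
# Sketch of a chat-completions request for an OpenAI-compatible server.
# BASE_URL and the model name below are illustrative assumptions.
BASE_URL = "https://api.lambdalabs.com/v1"  # assumed endpoint

def build_chat_request(model: str, user_message: str) -> dict:
    # All OpenAI-compatible servers accept this minimal payload shape:
    # a model name plus a list of role/content messages.
    return {
        "url": f"{BASE_URL}/chat/completions",
        "payload": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }

req = build_chat_request("hermes-3-llama-3.1-405b", "Hello!")
```

Sending it would just be an authenticated POST of `req["payload"]` to `req["url"]` with a Bearer token header.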
Sure, will do! I'll let you know if I have a crash. Good plan! In that case, actually, the dream node would have multiple string inputs (and outputs) and multiple int, number and float inputs (or an option to choose). And in the way you have it now: inputs/outputs build out as you connect one. I see it like this: because of the "all-round intelligence" of an AI chatbot, it is like magic in a box. Your Create Agent node should be a "Star Wars mothership", with endless ports to launch from and land on. In this case functionality creates opportunity and will create endless implementations.
So the agent does a run for every input, and has a resulting output per input - or, based on the agent's "work", it determines what the outputs should be? Would you be able to draw a graph of what you are thinking? It can be pretty rough - just to help me visualize it :)
Good question. Like a Star Wars mothership, it should have independent inputs and outputs. That gives you more control. E.g. when you are satisfied with one output, you can freeze it and keep it that way. It also saves calculation time: the AI only calculates when it's needed, once you can freeze certain outputs. There are 4 kinds of input: string, number, integer and float. Number, integer and float can share 1 input/output if they have a choose option; in that choose option (drop-down menu) you can then select the type. So that means there are two types of input and output: string and number/integer/float. Two kinds of inputs on the left side and two kinds of outputs on the right side. Minimum 2/2 IN/OUT, and as soon as you connect an in/output, the next in/output appears. I think this will make compact multi-functionality in one case. Do you have any thoughts on what else could be integrated?
You could also put a freeze/activate option in those little drop-down menus.
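The multi-input idea above can be sketched with ComfyUI's standard node interface (`INPUT_TYPES` / `RETURN_TYPES` / `FUNCTION`). This is a hypothetical illustration, not an actual Griptape node - the class and input names are made up, and a real implementation would grow inputs dynamically on connection:

```python
# Hypothetical sketch of a multi-string-input agent node using
# ComfyUI's node conventions. Optional inputs stand in for the
# "next in/output appears when you connect one" behaviour.
class MultiInputAgentNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {"string_1": ("STRING", {"multiline": True})},
            "optional": {
                "string_2": ("STRING", {"multiline": True}),
                "string_3": ("STRING", {"multiline": True}),
            },
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"

    def run(self, string_1, string_2="", string_3=""):
        # Combine whichever inputs are connected; a real agent node
        # would hand each one to the agent as a separate run.
        return ("\n".join(s for s in (string_1, string_2, string_3) if s),)
```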
Something else: a text editor node would improve the usability significantly. A text editor node would also expand the kinds of use and users. It would attract writers who want to build writing workflows (with automatic notes, etc., etc.).
And all other basic text editor functionality, like saving in different formats.
I don't know if you need a visual now, or do you? Let me know, I am happy to make one. I am also curious what your thoughts are.
All are great ideas! A drawing would still be helpful - I want to make sure I'm imagining it the same way :) Some things (like the WYSIWYG text editor) may be better handled as a default ComfyUI widget - I'll look around and see if one exists. In the meantime, keep the ideas coming! If you want to create new feature requests for each one, that would help me keep track of them. :)
Hermes told me that he has influence over his own temperature. He says that he can give temperature values to different topics. If this is true, then there should be multiple temperature fields/inputs in the config node. I have made a rule telling Hermes that the temperature for output format must always be 0.1, regardless of whether I make the minimum temperature higher. Ideal would be that you could dynamically set multiple temperatures to diversify the temperature per topic (which you then tell the AI in the rule set).
Ok, so by default you give the LLM all the data for making the fewest mistakes. I am curious if this is also valid when you want to save the trained LLM. I will wait for you to check whether saving a trained LLM as a copy will embed the training into the LLM.
Does off_prompt = OFF mean that the LLM remembers the whole conversation and not only the last question? I think not. I notice that off_prompt = OFF makes the LLM faster. Am I correct?
off_prompt=OFF really only matters for the output of tools. The Agent has conversation memory, and each time it sends a message to the LLM it will include the history of its interaction with the LLM - but each run of the LLM is "fresh". So if you don't connect agent -> agent, it won't remember. Here's a screenshot showing:
btw - having off_prompt=OFF does indeed go faster, because we don't need to query the memory for the LocalVectorStore.
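The "each run is fresh, but the agent keeps memory" distinction can be sketched like this. This is an illustrative toy, not Griptape's actual implementation - `call_llm` is a hypothetical stand-in for the real prompt driver:

```python
# Toy sketch: a stateless LLM call plus agent-side conversation memory.
def call_llm(prompt: str) -> str:
    # Each call is "fresh": the model only sees what is in `prompt`.
    return f"echo: {prompt.splitlines()[-1]}"

class AgentWithMemory:
    def __init__(self):
        self.history = []  # conversation memory lives on the agent, not the LLM

    def run(self, user_input: str) -> str:
        # The whole history is re-sent on every request - which is why
        # chained agent -> agent nodes "remember" and detached ones don't.
        prompt = "\n".join(self.history + [user_input])
        reply = call_llm(prompt)
        self.history += [user_input, reply]
        return reply
```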
I changed from Llama 3.1 to Hermes 70B and immediately got this message: Prompt executed in 0.04 seconds
And that doesn't happen with the other hermes model, or llama3.1?
It happens with all other models; even going back to llama 3.1 doesn't work anymore.
Even restarting ComfyUI doesn't help.
In all cases I also get this popup screen in ComfyUI:
Griptape Run: Agent
The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
ComfyUI Error Report: Error Details / Stack Trace / System Information / Devices / Logs
This is really strange - it looks like there are a lot of errors with other libraries as well. If you create an OPENAI_API_KEY, does it work as expected?
At this moment I do not use OpenAI and did not use/create/do anything with the API key. Everything was working fine with llama3.1; as soon as I switched to hermes3, I got the above error messages. I can try to create a new OPENAI_API_KEY and see if this solves the issue. "It looks like there are a lot of errors with other libraries as well."
I cleaned up Manager and it looks like it is solved.
Excellent - I like bugs like that. :)
I have been playing around with making some nodes to create new versions of the llama3.1 models based on usage... and while it's cool, I end up getting some very inconsistent results inside ComfyUI. But here's where it's at so far: in this example I'm creating an agent with llama3.1:latest. I give it a few rulesets and then run it a few times. Initially I had this all contained in a single node, but realized you might want to make some changes to the Modelfile, so I split it out into two.
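For readers unfamiliar with the Modelfile mentioned above: it is Ollama's recipe format for deriving a new local model from an existing one. A minimal illustrative example (the parameter values and system prompt here are assumptions, not the ones used in the nodes):

```
FROM llama3.1:latest
PARAMETER temperature 0.4
SYSTEM """You are a careful assistant that always answers in the agreed format."""
```

Such a file is turned into a runnable model with `ollama create my-tuned-llama -f Modelfile`, which is roughly what a node generating model variants would do under the hood.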
That means we have now established that models can be trained in ComfyUI just by using them. Probably mixed training behaviour could be solved by more training and/or other techniques. How many training runs did you do to make the newly trained model change its behaviour? This new insight will bring us new ways of thinking about it and about how we could design new techniques and workflows to optimise this functionality. Probably there will be many ways, solutions and opportunities we didn't think of yet. Also: the setting <off_prompt> ON or OFF now probably has influence on its training. Getting RAW data or REFINED data into its neural system makes a difference in how the AI model stores this data in its brain and also in how it is trained. This is so exciting, Jason: cool!
At this moment you cannot communicate with the ComfyUI checkpoints like you can with an LLM. You can only communicate the prompt, but you cannot meta-communicate with the checkpoint to improve its prompt skills (the LLM training the checkpoint). If it could somehow be possible to also meta-communicate with a checkpoint through the LLM, then the LLM could improve the end result more than only through prompt writing, like it does now. ELLA is such a technique to improve the prompt interpretation of older checkpoint models. An LLM which could add such functionality (one way or the other) would be a dream come true. It would revolutionise the improvement of ComfyUI across the board in a totally different way. This could also be a custom node, like ELLA, but controlled by the LLM. The advantage would be that the more the LLM is trained in writing ELLA-like code, the more (e.g.) prompt recognition would improve. Another approach would be looking at how the checkpoint (and KSampler) function. Ideal would be a merge of an LLM and a checkpoint, or an LLM with all the functionality of a certain ComfyUI checkpoint in itself. The same goes for a kind of custom node like ELLA, for improving checkpoint data before it goes into the sampler. Or maybe this could all be achieved by a specialised KSampler modified for an LLM, giving the LLM more control (an LLM-KSampler). A chatbot told me that there are special tokens which the KSampler uses to communicate with the checkpoint (steps:, plms:, dual_guidance:, cfg_scale, etc.), but that there are also special tokens which are not used, like meta:, config:, condition:. I am not sure if this is true, but there were some users who had success with the extra commands "meta" and "condition" in their positive prompt (e.g. condition: object_in_scene=car).
I think leaving the checkpoint for what it is and only seeing its output as RAW data, which can then be refined by an LLM (like ELLA) with an "LLM-programmable custom node" between the checkpoint and the KSampler, would be the best way. Just some thoughts while thinking out of the box.
GripTape Display Text changes size when there is more output and the text doesn't fit in the window. A scroll bar would be better, so that GripTape Display Text doesn't grow larger and disturb tight layouts.
I have to say that I also did not succeed in training the model to become consistent in its output. But I found the Reflection 70B model, which now leads almost all benchmarks of all models (including the 405B models). This opened my world, because I am starting to realise that you can train the model with Griptape nodes in any possible way. Not only training by rules, but also fine-tuning, RAG, transfer learning, zero-shot learning, reinforcement learning, knowledge distillation, multi-task learning, adversarial training, you name it! So I have now integrated the rule of the Reflection model (https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B), and I also added qwen2-math and made a connection between hermes3 and qwen2. At this moment they are teaching each other their skills and at the same time cooperating as a team to succeed in a FileManager task. Sometimes they can reach the FileManager, sometimes not, and I get this error message in my cmd shell: WARNING: DreamInputText.IS_CHANGED() got an unexpected keyword argument 'value' I have now discovered that models can be trained in ComfyUI however you like. The rules input of the Create Agent is a system-info input, where you can give the LLM any command, even to change its own architecture. I don't know where this leads, but I am sure having fun :-)
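A side note on that `IS_CHANGED() got an unexpected keyword argument 'value'` warning: it usually means a node's `IS_CHANGED` method doesn't accept the inputs ComfyUI passes to it. A hedged sketch of a tolerant signature (the class name is hypothetical, not the actual third-party node):

```python
# Hypothetical sketch of a ComfyUI-style IS_CHANGED that accepts any
# keyword (e.g. 'value') instead of a fixed parameter list.
class DreamInputTextSketch:
    @classmethod
    def IS_CHANGED(cls, **kwargs):
        # Return a stable fingerprint of the inputs so ComfyUI re-runs
        # the node only when an input actually changes.
        return hash(frozenset(kwargs.items()))
```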
I first trained hermes3 in improving its consistency in output format and in correcting its own output, and it improved significantly (but still not enough). What is also a huge benefit is that I can compare them in how they act in the same situation and what they are good at. I would now call qwen2 an engineer and hermes3 more a general manager (better in communication).
I now notice that the rules are very sensitive. I have hermes3 producing correct answers (give a list of file names in a certain folder), and as soon as I connect rules, he gets lost and can't give a correct answer. Up till now I have mixed results with the FileManager functioning: now and then it works, now and then it doesn't.
Thank you for these examples - I can test with them tomorrow with both
comfyui and plain griptape to find out where the problem is!
This will be super helpful!
…On Tue, 10 Sep 2024 at 8:10 PM, MedleMedler ***@***.***> wrote:
Jason, the more I test the GripTape nodes, the more I start getting confused.
Even the most simple setup is giving unstable results. Here are 3
examples where I only changed the order of the rules.
afbeelding.png: https://github.com/user-attachments/assets/9d25a64b-0a41-42d7-994c-7005ee469fbe
afbeelding.png: https://github.com/user-attachments/assets/8f0993af-e09e-47c7-81d6-bf933408394b
afbeelding.png: https://github.com/user-attachments/assets/99d13111-e743-43d0-a9b5-e98be3d42f48
Here are just 3 examples, with 3 different results, only changing the order
of the rules.
But even without changing anything in the rules, only changing from
hermes3 to Llama 3.1 will give different results.
The results are also unstable: one time an unchanged configuration
gives you a certain output, and the next time it can give you a totally
different result.
Even using a space at the end of a rule changes the outcome.
afbeelding.png: https://github.com/user-attachments/assets/590fa377-71f8-4937-8c59-dc01f3bd96bd
afbeelding.png: https://github.com/user-attachments/assets/42053858-5dd1-4a8b-8352-01a387b82efe
afbeelding.png: https://github.com/user-attachments/assets/ad04b9b3-c640-4cdf-895c-9f734a82e572
afbeelding.png: https://github.com/user-attachments/assets/dc726046-8695-4a28-860b-fd9a65e22606
Even when I have a certain output, if I change something (and do a
generation), and then change it back (exactly how it was), the
output changes.
afbeelding.png: https://github.com/user-attachments/assets/bdc567cd-9f1f-4e14-b2d6-df42317fbc8b
afbeelding.png: https://github.com/user-attachments/assets/6b496d43-8069-4d6c-a68b-11739cfdd96f
This is totally unworkable.
I am glad I can be of assistance. I will be even more glad if you can make this stable. :-) Something else: qwen2-math cannot use the FileManager; when I try, qwen2 says that it does not support the FileManager.
Continuing this exploration - hermes was a bit more consistent, but it didn't understand all the rules. I tried a few others: mistral doesn't work with tools, and even reflection, which is brand new, doesn't work with them. The bigger models all worked: claude-sonnet 3.5, cohere command-r-plus and gemini 1.5-pro.
That's actually good news, Jason. Open-source models are becoming better every month; it's just a matter of time. So we have now found out that open-source AI is not only not yet able to handle more complex Griptape workflows (concerning the order of making decisions), but also struggles with interpreting rules.
Totally. :) Btw - @MedleMedler, I was wondering if we could continue the conversation in Discord? That might make it more visible to other users, and keep the GitHub issues for specific bugs/feature requests we can close out. :) I created a thread - here's the link to it: https://discordapp.com/channels/1096466116672487547/1250191293439676526/1283476144821374987
Sure. I just installed Discord. If I open that link I land in the general "Openart Dev" channel, but not in that specific thread.
Got the request! Try this link to join the channel: https://discord.gg/NKVPWwBq
I also get this error message, after exactly following your API instructions:
Error occurred when executing Griptape Agent Config: Anthropic:
No API key provided. You can set your API key in code using 'voyageai.api_key = ', or set the environment variable VOYAGE_API_KEY=). If your API key is stored in a file, you can point the voyageai module at it with 'voyageai.api_key_path = ', or set the environment variable VOYAGE_API_KEY_PATH=. API keys can be generated in Voyage AI's dashboard (https://dash.voyageai.com).
File "F:!COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I read the previous message about the Anthropic API key, and I now understand that putting the Anthropic API key in the .env file does not work anymore?
Will you make a workaround, or are we stuck with using Voyage AI?
If we are stuck with Voyage AI, can you please make a tutorial on how exactly to make this work with Voyage AI?
Maybe off topic, but do I understand correctly that you have to pay Anthropic for generated AI data and now also have to pay Voyage AI for generated AI data that goes through their API embedding model?
For sure off topic (but worth mentioning): I already saw that Voyage AI (and also OpenAI) only accepts credit cards. That's a big disappointment for everybody without a credit card; they then also have to pay credit card fees. Pay, pay, pay... anyway.
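Following the two options the Voyage error message itself names, a minimal sketch of providing the key (the key value here is a placeholder, not a real key):

```python
import os

# Option 1 (per the error text): set the environment variable before
# the voyageai module is used. Placeholder value for illustration.
os.environ["VOYAGE_API_KEY"] = "pa-0000-placeholder"

# Option 2 (also per the error text): set it in code instead -
#   import voyageai
#   voyageai.api_key = os.environ["VOYAGE_API_KEY"]
```

Either way, the key has to be in place before the Griptape Anthropic config node runs its embedding driver.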