
Allow creating (and deleting) your own Assistant & the ability to set the specific model that is being used #308

Closed
krschacht opened this issue Apr 26, 2024 · 9 comments · Fixed by #334

Comments

@krschacht
Contributor

krschacht commented Apr 26, 2024

This way you can have custom instructions for brainstorming or coding and pick the right assistant.

For creating an assistant, we can add a button right at the bottom of the list of assistants. ChatGPT has an "explore GPTs" button but I think our button would just be "+ New" or something like that.

Clicking it takes you into settings/assistants/new and you get a basic form. The one thing to get right is turning the model text attribute into a drop-down.

Deleting an assistant is a bit trickier because conversations & messages are associated with an assistant. I think this means we need to "soft delete." I think that basically consists of:

  • Adding deleted_at column to assistants
  • Adding a Delete button beneath the assistant editing form (I think it's there and just commented out)
  • Updating the destroy action in settings/assistants_controller to perform a soft delete
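In plain Ruby, the soft-delete idea boils down to something like this (a minimal sketch with illustrative names, not the actual HostedGPT models — in the app this would be a `deleted_at` column plus an updated `destroy` action):

```ruby
# Minimal plain-Ruby sketch of soft deletion: instead of removing the
# record, we stamp deleted_at and filter deleted records out of listings.
class Assistant
  attr_reader :name, :deleted_at

  def initialize(name)
    @name = name
    @deleted_at = nil
  end

  # The "destroy" action would call this instead of actually deleting.
  def soft_delete!
    @deleted_at = Time.now
  end

  def deleted?
    !@deleted_at.nil?
  end
end

assistants = [Assistant.new("Brainstorming"), Assistant.new("Coding")]
assistants.first.soft_delete!

# Listings only show active assistants; old conversations can still
# reference the soft-deleted one.
active = assistants.reject(&:deleted?)
puts active.map(&:name).inspect  # => ["Coding"]
```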

After we do this, one important thing to check: when you view an old conversation whose assistant has since been deleted, does it still work? Notably, the app attempts to highlight that conversation's assistant in the left sidebar, and that logic will now be looking for a non-existent assistant, so it probably needs updating. I think it could just default to highlighting the first assistant in the list instead. I'm not sure how tricky this might be.
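The fallback could be a small helper along these lines (hypothetical method name, assuming the view knows the conversation's assistant and the list of active assistants):

```ruby
# Hypothetical sketch: pick which assistant to highlight in the sidebar.
# If the conversation's assistant was soft-deleted (no longer in the
# active list), fall back to the first active assistant.
def assistant_to_highlight(conversation_assistant, active_assistants)
  if conversation_assistant && active_assistants.include?(conversation_assistant)
    conversation_assistant
  else
    active_assistants.first
  end
end
```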

While we're making this change, we should rename "instructions" to "Custom Instructions" within the edit/new assistant form.

This idea came from: https://www.reddit.com/r/rails/comments/1cdxjb3/a_new_rails_hostedgpt_is_out_what_should_come_next/

@krschacht krschacht added this to the 0.7 milestone Apr 26, 2024
@krschacht krschacht added the enhancement New feature or request label Apr 30, 2024
@krschacht krschacht changed the title Allow creating (and deleting) your own Assistant Allow creating (and deleting) your own Assistant & the ability to set the specific model that is being used May 7, 2024
@krschacht
Contributor Author

FYI, @stephan-buckmaster is working on this one, just so no one else picks it up (GitHub only lets me assign issues to other commenters).

@stephan-buckmaster
Contributor

The last commit implements soft deletion. Some user tests fail where all of a user's dependents are expected to be destroyed, but they no longer are.

Can you take a look at these changes before I work on full test coverage, @krschacht?

@krschacht
Contributor Author

> Can you take a look at these changes, before I look into covering all by tests, @krschacht

Hi @stephan-buckmaster sorry for the delay on this! But yes, absolutely happy to review it. I’ll be sure to give it a close look tomorrow. I’m excited to have this in! I’ve been working on greatly expanding the capabilities of assistants through the addition of tools, so it’s going to be great to have this foundation in place for managing assistants.

@stephan-buckmaster
Contributor

stephan-buckmaster commented May 18, 2024 via email

@krschacht
Contributor Author

@stephan-buckmaster I got about halfway through it today. So far it's looking good! I'm making some tweaks as I go to save us some back and forth, but it's just been little stuff: tightening HTML/CSS styling, changing some explicit references to IDs. I'll share some notes when I get through everything and push up to your branch.

@stephan-buckmaster
Contributor

stephan-buckmaster commented May 19, 2024 via email

@krschacht
Contributor Author

krschacht commented May 19, 2024

@stephan-buckmaster I finished going through and left some comments on the PR. I'm excited that you're working on making locally running models available! A couple quick thoughts on that:

  1. I was recently playing with Llama 3 on my Mac and discovered that the Llamafile format is an incredibly easy way to download and run an LLM locally. I'm sharing it in case you haven't come across it. I learned about it from this blog post where he explains the options: https://simonwillison.net/2024/Apr/22/llama-3/
  2. With regards to supporting non-Anthropic and non-OpenAI APIs, I've been having some correspondence with the creator of langchainrb. I find its name a little confusing, since it's not really trying to be LangChain. But what's notable about the project is that he's created an Assistant abstraction on top of ruby-openai and ruby-anthropic, and he already has support for a lot of other providers.

I was assuming that we'd eventually unify the AIBackend into a single interface. Right now backend/openai and backend/anthropic are incredibly similar; really I should have created a base class that they both inherit from, letting each override whatever subtle differences exist.
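That refactor might look roughly like this (illustrative class and method names, not the actual HostedGPT code; the provider API calls are stubbed out):

```ruby
# Sketch of the base-class idea: shared request logic lives in AIBackend,
# and each provider subclass overrides only the hooks that differ.
class AIBackend
  def get_next_message(messages)
    send_request(format_messages(messages))
  end

  private

  # Default formatting, shared by providers that take OpenAI-style messages.
  def format_messages(messages)
    messages.map { |m| { role: m[:role], content: m[:content] } }
  end

  def send_request(formatted)
    raise NotImplementedError, "each backend implements its own API call"
  end
end

class OpenAIBackend < AIBackend
  private

  def send_request(formatted)
    # Would call the ruby-openai client here; stubbed for illustration.
    { provider: :openai, messages: formatted }
  end
end

class AnthropicBackend < AIBackend
  private

  # Anthropic takes the system prompt as a separate parameter rather than
  # as a message, so this subclass overrides the formatting hook.
  def format_messages(messages)
    messages.reject { |m| m[:role] == :system }
  end

  def send_request(formatted)
    # Would call the ruby-anthropic client here; stubbed for illustration.
    { provider: :anthropic, messages: formatted }
  end
end
```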

But if you're the first one to start looking into adding additional LLM backends, it might be worth taking a look at langchainrb to decide if we should pull it in. Notably, if we added this gem we'd remove the explicit ruby-openai and ruby-anthropic entries from our Gemfile, since langchainrb already includes both of those.

I haven't dug in enough to form an opinion on whether this gem is worth adding solely for its abstraction of LLMs. It does a lot of other stuff which we're not using at this time, so it may be overkill. I'm just throwing it out there in case it helps you in your effort.

@stephan-buckmaster
Contributor

Yes, I came across that gem too; I think it will be good for HostedGPT. But for now, being able to configure OpenAI for remote servers will be fine as well. So I would suggest making the changes in such a way that they can be expanded later.

@krschacht
Contributor Author

I recently came across another project helping to run local models. I haven’t dug in, but sharing here in case it’s useful: https://llamaedge.com/
