
Releases: jbexta/AgentPilot

Release 0.3.2.1

14 Sep 01:35

What's new?

Model drop-down fields can now tweak that model's parameters.
New provider architecture; only the 'litellm' provider is implemented so far.
New button to sync models to the latest version (for now I only update popular providers, so some may still be missing).

Tools finally integrated (new issue with numeric parameters)
Tool message bubble can edit its parameters
Re-running tools creates a new branch
New message role 'result'

Blocks can now be nested
Added circular reference error for blocks (on execution)
Code blocks are not scanned for nested blocks, but other block types can nest code blocks
Button to test a block from the blocks page
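The execution-time circular reference check can be sketched as a depth-first walk over block references. The block names and the `blocks` mapping here are hypothetical, not AgentPilot's actual data model:

```python
def find_cycle(blocks, start):
    """Depth-first walk over block references; returns the cycle path if one exists.

    `blocks` maps a block name to the list of block names it nests.
    """
    path, seen = [], set()

    def visit(name):
        if name in path:                      # revisiting a block on the current path = cycle
            return path[path.index(name):] + [name]
        if name in seen:                      # already fully explored, no cycle through here
            return None
        seen.add(name)
        path.append(name)
        for child in blocks.get(name, []):
            cycle = visit(child)
            if cycle:
                return cycle
        path.pop()
        return None

    return visit(start)

# A nests B, B nests A: the error would be raised on execution
blocks = {"A": ["B"], "B": ["A"], "C": []}
print(find_cycle(blocks, "A"))  # ['A', 'B', 'A']
print(find_cycle(blocks, "C"))  # None
```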

New page Envs
Custom Python virtual envs can be created and deleted, freeing tools from the limited set of provided packages.
Sync PyPI packages, and install or remove them from venvs.
Environment variables can be set in Envs, where secrets can be stored for tools.
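A custom virtual env like the ones on the Envs page can be created with Python's standard library; this is a generic sketch of the technique, not the app's actual implementation (the `create_env` helper is made up):

```python
import subprocess
import sys
import venv
from pathlib import Path

def create_env(root: Path, name: str, packages=()):
    """Create a virtual env with pip and install the requested packages."""
    env_dir = root / name
    venv.EnvBuilder(with_pip=True).create(env_dir)  # with_pip bootstraps pip
    # The env's interpreter lives in bin/ (Scripts\ on Windows)
    py = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "python"
    for pkg in packages:
        subprocess.check_call([str(py), "-m", "pip", "install", pkg])
    return env_dir
```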

Fixed prompt block bug
Fixed a bug when editing messages with markdown
Fixed auto completion tab bug

New block type 'Metaprompt'
The first metaprompt added is a Claude prompt generator (any model can be used, but it works best with Anthropic)
New "Magic wand" button added to the message box input; this uses a metaprompt to enhance your prompt.
The enhancement should only be the text inside the tag, but it currently includes other tags; fix coming soon

Blocks and Tools pages can be pinned to the main sidebar (right click to Pin/Unpin)
The default chat model now works
Other fixes

New bugs

On Linux, creating a venv does not install pip
Numeric tool parameters get stuck at -99999
When editing a previous message with markdown, you have to press the resend button twice to resend (the first click makes the bubble lose focus, which blocks the button click event)
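For the Linux venv bug above, pip can usually be bootstrapped into a pip-less venv with the standard library's ensurepip; a possible workaround sketch (the /tmp/demo_env path is a placeholder, use your env's actual location):

```python
# Hypothetical workaround: bootstrap pip into a venv created without it,
# by running the standard library's ensurepip via the venv's own interpreter.
import subprocess
import sys
import venv
from pathlib import Path

env_dir = Path("/tmp/demo_env")                   # placeholder path
venv.EnvBuilder(with_pip=False).create(env_dir)   # reproduce a pip-less venv
py = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "python"
subprocess.check_call([str(py), "-m", "ensurepip", "--upgrade"])
subprocess.check_call([str(py), "-m", "pip", "--version"])  # pip now available
```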

Notes

Same version as the release below, but the AppImage was built with some libs that were causing an issue on my machine
Environment variables set in one Env can be accessed by all other Envs (for now)
The local env is your own machine and is not sandboxed at all, so it's important to trust and understand any code you run

Release 0.3.2

13 Sep 23:33

What's new?

Model drop-down fields can now tweak that model's parameters.
New provider architecture; only the 'litellm' provider is implemented so far.
New button to sync models to the latest version (for now I only update popular providers, so some may still be missing).

Tools finally integrated (new issue with numeric parameters)
Tool message bubble can edit its parameters
Re-running tools creates a new branch
New message role 'result'

Blocks can now be nested
Added circular reference error for blocks (on execution)
Code blocks are not scanned for nested blocks, but other block types can nest code blocks
Button to test a block from the blocks page

New page Envs
Custom Python virtual envs can be created and deleted, freeing tools from the limited set of provided packages.
Sync PyPI packages, and install or remove them from venvs.
Environment variables can be set in Envs, where secrets can be stored for tools.

Fixed prompt block bug
Fixed a bug when editing messages with markdown
Fixed auto completion tab bug

New block type 'Metaprompt'
The first metaprompt added is a Claude prompt generator (any model can be used, but it works best with Anthropic)
New "Magic wand" button added to the message box input; this uses a metaprompt to enhance your prompt.
The enhancement should only be the text inside the tag, but it currently includes other tags; fix coming soon

Blocks and Tools pages can be pinned to the main sidebar (right click to Pin/Unpin)
The default chat model now works
Other fixes

New bugs

On Linux, creating a venv does not install pip
Numeric tool parameters get stuck at -99999
When editing a previous message with markdown, you have to press the resend button twice to resend (the first click makes the bubble lose focus, which blocks the button click event)

Notes

Environment variables set in one Env can be accessed by all other Envs (for now)
The local env is your own machine and is not sandboxed at all, so it's important to trust and understand any code you run

Release 0.3.1

12 Jul 18:56

DO NOT USE
Every Prompt block will be executed each time an OI agent is loaded, wasting tokens.

What's new?

  • Modified Open Interpreter with a dirty workaround to allow the Python kernel to be used from an executable.
  • All code execution from the executable now works using Open Interpreter.
  • Fixed orphaned model bug

NEW KNOWN ISSUES:

  • The Linux release may not work on your machine because of a new dependency issue; you might need to build it yourself using the build.py script.
  • The app becomes unresponsive for the first few seconds when the kernel launches. The kernel is launched whenever the OpenInterpreter plugin is used or code is executed.

Release 0.3.0

04 Jul 00:58

DO NOT USE
Every Prompt block will be executed each time an OI agent is loaded, wasting tokens.

It's been a while since the last release, and this update might be a bit disappointing as it's not as complete as I wanted it to be. Most of my time has been spent on the architecture, allowing nested workflows and dynamic GUI settings depending on the plugin.
As well as the below, I wanted to get vector stores, memory and files/images supported with this release, but they just aren't ready yet.

There are a few new bugs, but I'll get those fixed. I'd recommend not trying to make nested workflows yet, as they aren't finished, but saving a multi-agent workflow as a single entity is fine. A nested workflow is a multi-member workflow where any of the members is itself another multi-member workflow.

I've had to strip out crewai for now because of a langchain dependency issue.

What's new?

Added anonymous telemetry, enabled by default. The only thing sent right now is an event when the app is started, to get an idea of the user count.

Added member list to workflow
Button to disable autorun for granular execution
Added circular reference error
Button to show/hide hidden messages
New workflow components
User - Add your input mid-workflow
Tool - Get the output of a tool
Members aligned vertically are run asynchronously
Added 'Waiting for ..' bar to groups
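The vertical-alignment rule above can be pictured with asyncio: members in the same column run concurrently. The member names and the `run_member`/`run_group` helpers are invented for illustration, not the app's real workflow engine:

```python
import asyncio

async def run_member(name: str, delay: float) -> str:
    """Stand-in for one workflow member producing a response."""
    await asyncio.sleep(delay)
    return f"{name} done"

async def run_group(members):
    # Vertically-aligned members run concurrently rather than one after another
    return await asyncio.gather(*(run_member(n, d) for n, d in members))

results = asyncio.run(run_group([("Researcher", 0.1), ("Writer", 0.1)]))
print(results)  # ['Researcher done', 'Writer done']
```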

New Open Interpreter fully integrated
Added auto-run code secs field
Plugins can override the GUI settings (Agent & Workflow)
OpenAI Assistants are streamable and support branching chats
Added Plugins settings pages

Nested workflows (unfinished, be careful)
Added Sandboxes page (initially only local)
Tools can use any of the 9 languages that OI supports

Display theme presets
Providers & models update
Blocks can now be Text, Prompt or Code

Made fields optional ('max messages', 'max turns')
Max turns now works with branching chats

NEW KNOWN ISSUES:
Changing the config of an OpenAI Assistant won't reload the assistant; for now, close and reopen the chat.
Some others
Be careful using auto-run code with Open Interpreter: in any chat you open, if code is the last message it will start auto-running. I'll add a flag to remember if the countdown has been stopped.
Logs are broken and need reimplementing.
Flickering when a response is generating and the page is scrolled up.
Sometimes the scroll position of the chat page jumps after a response has finished.
The Windows exe must have the console visible or it affects the streams

Release 0.2.0

14 Mar 16:46

This is an early release of version 0.2. It isn't fully featured yet, but it's fairly stable; new experimental features will be coming in the next few weeks.

What's new?

  • All pages and fields are now created procedurally from a schema. This is a new framework for the GUI for easier maintenance, readability and extensibility, and lays a foundation for a more complex GUI to be built.
  • Folders are enabled for agents, chats, tools and blocks, allowing for better organization.
  • New Tools and Files pages (Files are not yet used by the agents, and tools are partially implemented)
  • Added preloaded messages as a way to teach your agents how to respond.
  • Full support for light themed displays
  • OpenAI assistant plugin (docs coming soon)
  • CrewAI plugin (docs coming soon)
  • API and model list update (New providers: Mistral, Groq and others, New models: Claude 3 and others)
  • Cosmetic changes
  • Other features / changes
  • Faster load times
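The schema-driven GUI in the first bullet can be illustrated with a toy example; the schema keys and widget names below are made up for illustration, not the app's real framework:

```python
# Toy illustration of building form fields procedurally from a declarative
# schema. The schema format and widget names are hypothetical.
SCHEMA = [
    {"key": "model", "type": "text", "default": "gpt-4"},
    {"key": "temperature", "type": "float", "default": 0.7},
    {"key": "stream", "type": "bool", "default": True},
]

WIDGETS = {"text": "LineEdit", "float": "SpinBox", "bool": "CheckBox"}

def build_page(schema):
    """Map each schema entry to a (widget, key, default) triple."""
    return [(WIDGETS[f["type"]], f["key"], f["default"]) for f in schema]

for widget, key, default in build_page(SCHEMA):
    print(f"{widget}: {key} = {default}")
```

Adding a field to a page then means adding one schema entry rather than hand-writing widget code, which is the maintenance win described above.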

Issues

  • Files aren't used by the agents yet; in the coming weeks they will be integrated into native and OAI assistants.
  • Only imported tools work right now; guides and documentation are coming in the next few weeks.
  • Voice has been temporarily disabled
  • OpenAI assistants lose their 'instance_config' when their config is modified, causing them to lose memory of the chat.
  • Tools use the controversial 'exec' temporarily until sandboxes are implemented; assume anyone with access to your database can execute code on your machine.
  • OpenInterpreter has been temporarily disabled because of a dependency issue; coming back soon
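The warning about 'exec' is easy to demonstrate; this generic illustration (not AgentPilot's actual tool runner) shows why stored tool code must be trusted:

```python
# Any string stored as a tool's code runs with full interpreter privileges.
# Generic illustration of the risk, not AgentPilot's actual tool runner.
tool_code = """
import os
result = os.getcwd()  # could just as easily delete files or read secrets
"""

namespace = {}
exec(tool_code, namespace)  # no sandboxing: full access to os, network, disk
print(namespace["result"])
```

Anything that can write to the database can plant code here, which is exactly the threat the bullet above describes.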

How to migrate your data to 0.2.0

Copy your old database (data.db) to the new application folder before you start the app.
Agents, chats and API keys are migrated, but anything else is not.
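The migration step is a single file copy; a minimal sketch with a hypothetical `migrate` helper; replace the paths with your own old and new application folders:

```python
import shutil
from pathlib import Path

def migrate(old_dir: Path, new_dir: Path) -> Path:
    """Copy the old data.db into the new application folder (run before first launch)."""
    src = old_dir / "data.db"
    dst = new_dir / "data.db"
    if dst.exists():
        raise FileExistsError(f"{dst} already exists; refusing to overwrite")
    return Path(shutil.copy2(src, dst))  # copy2 preserves file metadata
```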

What next?

  • Fully implement files and tools
  • Define special behaviour for file types (images can be passed into any vision model)
  • Support sandboxed environments cloud & local.
  • Deeper integration of context plugins like CrewAI & autogen
  • Reimplement voice TTS and STT
  • Custom config pages for plugins
  • New workflow components
  • Rewrite workflow logic

Release 0.1.7

12 Jan 17:07
872f367

What's new?
(Same as 0.1.6, amended release to temporarily disable a new feature)

New plug-in architecture
New Open-Interpreter (not fully working yet)
Fix auto title blocking main thread
Faster loading
Added Assistant API (no retrieval yet)
Fix for OpenGL issue (thanks to @mruderman)
Added API headers to organise the list of LLM models (thanks to @chymian)
Added button "Set member config to default" (In chat member settings)
Added button "Set all member configs to default" (In agent settings)

New issues / bugs:
When plugin settings are changed, the context needs to be reloaded for them to take effect
The "Set member config to default" buttons don't work with the OAI Assistants API, or any custom plugin with instance settings

Release 0.1.6

10 Jan 12:06

What's new?

New plug-in architecture
New Open-Interpreter (not fully working yet)
Fix auto title blocking main thread
Faster loading
Added Assistant API (no retrieval yet)
Fix for OpenGL issue (thanks to @mruderman)
Added API headers to organise the list of LLM models (thanks to @chymian)
Added button "Set member config to default" (In chat member settings)
Added button "Set all member configs to default" (In agent settings)

New issues / bugs:
When plugin settings are changed, the context needs to be reloaded for them to take effect

Release 0.1.5

13 Dec 04:44
  • Stable
  • Better style sheets for windows
  • Windows fix: window opens in bottom corner
  • Open Interpreter uses agent System Message
  • Fix auto title
  • Added auto title prompt + model settings
  • Added context title to chat
  • Added dev mode
  • Added button to fix all empty titles (in dev mode)
  • Re-enable branching (not stable yet but usable)
  • Fix stop generation button

The following versions will only be bugfixes, if any are found, until the next major version 0.2.0, which will introduce RAG & Functions

Release 0.1.4

09 Dec 16:14
v0.1.4

update readme

Release 0.1.3

08 Dec 13:17
v0.1.3

stable, no branching