Releases: OpenInterpreter/open-interpreter
v0.1.6
Generator Update (Quick Fixes I)
What's Changed
- fix: stop overwriting boolean config values by @ericrallen in #508
- Update WINDOWS.md by @rsfutch77 in #523
- Fix ARM64 llama-cpp-python Install on Apple Silicon by @gavinmclelland in #505
- Broken empty message response by @blujus in #501
- fix crash on unknown command on call to display help message by @mocy in #493
- Update get_relevant_procedures.py by @kubla in #492
New Contributors
- @ericrallen made their first contribution in #508
- @rsfutch77 made their first contribution in #523
- @gavinmclelland made their first contribution in #505
- @blujus made their first contribution in #501
- @mocy made their first contribution in #493
- @kubla made their first contribution in #492
Full Changelog: v0.1.5...v0.1.6
v0.1.5
The Generator Update
Features
- Modular, generator-based foundation (rewrote entire codebase)
- Significantly easier to build Open Interpreter into your applications via `interpreter.chat(message)` (see JARVIS for example implementation)
- Run `interpreter --config` to configure `interpreter` to run with any settings by default (set your default language model, system message, etc.)
- Run `interpreter --conversations` to resume conversations
- Budget manager (thank you LiteLLM!) via `interpreter --max_budget 0.1` (sets max budget per session in USD)
- Change the system message, temperature, max_tokens, etc. from the command line
- Central `/conversations` folder for persistent memory
- New hosted language models (thank you LiteLLM!) like Claude, Google PaLM, Cohere, and more.
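The generator-based foundation means a host application can consume `interpreter.chat(message)` incrementally rather than waiting for a full response. A minimal mock sketches the pattern (this is an illustrative stand-in, not the real open-interpreter API; the chunk shape here is an assumption):

```python
# Hypothetical sketch of a generator-based chat interface, mimicking
# how a streaming interpreter.chat(message) could yield output chunks.
# The {"type": ..., "content": ...} chunk format is assumed for illustration.

def chat(message):
    """Yield response chunks one at a time, like a streaming LLM reply."""
    for word in f"Echo: {message}".split():
        yield {"type": "message", "content": word}

# A caller renders each chunk as it arrives instead of blocking
# on the complete response:
chunks = [chunk["content"] for chunk in chat("hello world")]
print(" ".join(chunks))  # → Echo: hello world
```

Because `chat` is a generator, an embedding application can stop iteration early, filter chunks by type, or forward them to a UI as they stream in.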
What's Changed
- Fix typo 'recieved' > 'received' by @merlinfrombelgium in #361
- Pull request template created by @TanmayDoesAI in #365
- docs: move pr template to .github folder by @jordanbtucker in #373
- chore: enhance .gitignore by @jordanbtucker in #374
- chore: add vscode debug support by @jordanbtucker in #375
- discard the / as a command, as it blocked Mac/Linux from loading files by @moming2k in #378
- Update interpreter.py for a typo error by @YUFEIFUT in #397
- Translated Open Interpreter README into Hindi by @zeelsheladiya in #417
- Add models to pull request template by @mak448a in #423
- Retry connecting to openai after hitting rate limit to fix #442 by @mathiasrw in #452
- Handle %load_message failure in interpreter.py by @richawo in #431
- add budget manager for api calls by @krrishdholakia in #316
- The Generator Update by @KillianLucas in #482
New Contributors
- @YUFEIFUT made their first contribution in #397
- @zeelsheladiya made their first contribution in #417
- @mak448a made their first contribution in #423
- @mathiasrw made their first contribution in #452
- @richawo made their first contribution in #431
- @krrishdholakia made their first contribution in #316
Full Changelog: v0.1.4...v0.1.5
v0.1.4
What's Changed
- Add support for R language by @freestatman in #249
- Feature: Implement and Document New Interactive Mode Commands by @moming2k in #302
- Remove previous message and its responses from chat history with Undo-command. by @oliverpalonkorp in #273
- Enable resume download from HF by @jerzydziewierz in #345
- ui: Optimize welcome message by @codeacme17 in #257
- feat: Add hints to Azure model by @codeacme17 in #237
- docs: Upgrade issue templates by @jordanbtucker in #262
- docs: Separate system versions into own fields by @jordanbtucker in #264
- Docs: use x64 in WINDOWS.md and GPU.md by @jordanbtucker in #287
- Fix using litellm.api_base, litellm.api_key, litellm.api_version by @ishaan-jaff in #284
- Fix typo. by @Michael-Lfx in #292
- fix(ui): Fix the display problem of welcome message by @codeacme17 in #270
- Docs: Add security policy by @jordanbtucker in #266
- Check disk space before downloading models by @michaelzdrav in #323
- Update GPU.md by @metantonio in #335
- remove duplicate import of inquirer library in get_hf_llm.py by @lalebot in #327
- fix: merge os.environ with llama install env_vars by @jordanbtucker in #338
- docs: move CONTRIBUTING to common path by @jordanbtucker in #350
- Fix minor typo by @osanseviero in #248
New Contributors
- @freestatman made their first contribution in #249
- @osanseviero made their first contribution in #248
- @okisdev made their first contribution in #253
- @codeacme17 made their first contribution in #257
- @gijigae made their first contribution in #282
- @Michael-Lfx made their first contribution in #292
- @jjolly made their first contribution in #278
- @michaelzdrav made their first contribution in #323
- @metantonio made their first contribution in #335
- @lalebot made their first contribution in #327
- @jerzydziewierz made their first contribution in #345
Full Changelog: v0.1.3...v0.1.4
v0.1.3
What's Changed
- Quick fix for `--model tiiuae/falcon-180B` (redirect to GGUF version).
- Quick fix for #247
Update pushed to `pip` with just the fixes above. After that, I merged this commit, which will be in the next `pip` version:
- Add support for R language, update instructions for package installation by @freestatman in #249
New Contributors
- @freestatman made their first contribution in #249
Full Changelog: v0.1.2...v0.1.3
v0.1.2
What's Changed
- docs: explain GPU support by @jordanbtucker in #102
- feat: add AZURE_API_KEY that falls back to OPENAI_API_KEY by @jordanbtucker in #135
- docs: explain Windows Code-Llama build requirements by @jordanbtucker in #138
- Created contribution guidelines by @TanmayDoesAI in #101
- docs: create issue templates by @jordanbtucker in #176
- moved all markdown files to a folder, updated the readme for the same… by @TanmayDoesAI in #182
- Fix download URL for CodeLlama 7B high quality model by @merlinfrombelgium in #181
- docs: add interpreter version to template by @jordanbtucker in #190
- docs: fix example version number for interpreter by @jordanbtucker in #191
- docs: add enhancement label to feature requests by @jordanbtucker in #192
- docs: prevent blank issues by @jordanbtucker in #195
- docs: provide issue template link by @jordanbtucker in #196
- Update README.md by @macterra in #197
- Create MACOS Documentation by @ihgalis in #177
- Add option to override Azure API type by @Taik in #189
- Feature: add cli environment variable by @moming2k in #157
- Update MACOS.md by @ihgalis in #215
- Falcon // Any 🤗 model via `--model meta/llama` by @KillianLucas in #213
- Update contributing.md with instructions on how to get local fork running by @oliverpalonkorp in #235
- remove redundant checks for apple silicon by @shubhe25p in #230
- Fix GPT 3.5 from failing to run commands by @Maclean-D in #96
New Contributors
- @jordanbtucker made their first contribution in #102
- @merlinfrombelgium made their first contribution in #181
- @macterra made their first contribution in #197
- @ihgalis made their first contribution in #177
- @Taik made their first contribution in #189
- @moming2k made their first contribution in #157
- @oliverpalonkorp made their first contribution in #235
- @shubhe25p made their first contribution in #230
- @Maclean-D made their first contribution in #96
Full Changelog: v0.1.1...v0.1.2
v0.1.1
What's Changed
- Added Azure support by @ifsheldon in #62
- CodeLlama improvements by @KillianLucas in #87
- Rate limit error fix
New Contributors
- @ifsheldon made their first contribution in #62
Full Changelog: v0.1.0...v0.1.1
v0.1.0
Open Interpreter v0.1.0
Open Interpreter lets LLMs run code locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `$ interpreter` after installing.
- CodeLlama supported with `--local`, more models coming soon
- Interpreters loaded for Python, JavaScript, and Shell
- Streaming chat in your terminal (thanks to Textualize/Rich!)
New Contributors
- @TanmayDoesAI made their first contribution in #25
Full Changelog: v0.0.297...v0.1.0
v0.0.297
What's Changed
- Windows CURL error fix by @KillianLucas in #23
- Fixed error where long conversations would hang forever (#5) by updating TokenTrim
New Contributors
- @eltociear made their first contribution in #28
Full Changelog: v0.0.296...v0.0.297
v0.0.296
Added Code Llama support.
Full Changelog: v0.0.295...v0.0.296
v0.0.295
What's Changed
- Better CLI messages
- (Experimental) Llama-2 support