
Releases: lastmile-ai/aiconfig

v1.1.32

18 Mar 19:13

Changelog

Last PR included in this release: #1468

(2024-03-18) Python Version 1.1.32, NPM Version 1.1.15

Features

  • editor: Added UI for updating global model settings in AIConfig files (#1441)
  • vscode: Added commands to allow creating an empty AIConfig in addition to the example starter file (#1448)

Bug Fixes / Tasks

  • python-sdk: Removed test dependencies from the python-aiconfig package (#1463)
  • python-dev: Added Python auto-formatting check as a Github action to prevent unformatted files from merging into the repository (#1458)
  • typescript-dev: Specified jest to ignore dist files that get generated from running yarn so that automated tests do not incorrectly fail (#1466)

Documentation

  • [update] Added keywords and categories to the VS Code extension, making it easier to find (#1430)
  • [update] Removed erroneous await statement for loading AIConfig file in Gradio Notebook docs (#1435) → thanks @Anikait10!
  • [update] Removed spaces between Github README badges to remove visible underscores (#1446) → thanks @jonathanagustin!

v1.1.31

11 Mar 20:38

Changelog

(2024-03-11) Python Version 1.1.31, NPM Version 1.1.14

Last PR included in this release: #1426

Features

  • python-sdk: Added OpenAIVisionParser to core model parsers, allowing integrations with OpenAI chat/vision models and adding gpt-4-vision-preview as a core model parser (#1416, #1417)
  • editor: Added model schema and prompt input formatting for GPT-4 vision (#1397)
  • extension: Created extension for Groq inference (#1402)

Bug Fixes / Tasks

  • python-sdk: Unpinned openai dependency and updated to 1.13.3 (#1415)
  • vscode: Removed the check requiring the .env file path to be a parent of the user’s VS Code workspace, allowing users to define an .env file anywhere (#1398)

Documentation

  • [new] Created README and cookbook to show how to use the Groq inference extension (#1405, #1402)
  • [updated] Removed warning text from the Gradio Notebooks docs saying that the Gradio SDK must be <= v4.16.0; that issue is resolved, so the latest Gradio SDK versions can now be used (#1421)

v1.1.29

05 Mar 19:22

Changelog

(2024-03-05) Python Version 1.1.29, NPM Version 1.1.13

Last PR included in this release: #1401

Features

  • vscode: Enabled find widget (CMD/CTRL + F) in AIConfig editor webviews (#1369)
  • editor: Added input model schema for Hugging Face Visual Question Answering tasks (#1396)
  • editor: Set environment variables from an .env file whose path is saved into the VS Code configuration settings and refreshed during the current session (#1390)

Bug Fixes / Tasks

  • vscode: Fixed issue where autosaving was causing outputs to disappear and prompt inputs to lose focus when typing (#1380)
  • vscode: Updated new/untitled AIConfig file flow to follow regular new/untitled file flow in VS Code, prompting for file name on first save (#1351)
  • vscode: Used untitled name instead of first line contents for untitled file tab name (#1354)
  • vscode: Stopped surfacing the ‘Error updating aiconfig server’ message when closing untitled AIConfig files (#1352)
  • editor: Fixed an issue where readonly rendering of prompt settings was causing the page rendering to fail (#1358)
  • editor: Fixed default cell styles when no mode or themeOverride is specified (#1388)
  • vscode: Reran the extension server to re-read environment variables after they’ve been updated (#1376)

v1.1.28

27 Feb 22:00

Changelog

(2024-02-27) Python Version 1.1.28, NPM Version 1.1.12

Last PR included in this release: #1379

Features

  • vscode: Supported thorough Python interpreter validation checks, ensuring that pip and Python are installed, are the correct version, and that dependencies get installed when viewing an AIConfig file (#1262, #1294, #1304, #1313, #1328, #1347, #1348)
  • vscode: Supported restarting the active editor server via AIConfig: Restart Active Editor Server command. This command is useful to restart the server if it’s experiencing issues (#1319, #1349)
  • vscode: Created walkthrough that opens when extension is first installed, as well as the command AIConfig: Welcome to open it (#1316, #1343, #1374)
  • vscode: Created AIConfig: Submit Feedback command and button which redirects to our issues page on Github (#1362)
  • editor: Supported updating the editor client AIConfig state via the aiconfig prop, allowing the component to be used as a controlled component with external AIConfig state (#1261)
  • editor: Added an input model schema for Gemini (#1023)
  • editor: Moved Gemini to the third position in the model list (#1353)
  • vscode: Created AIConfig: Set API Keys command to help guide users on setting api keys in a .env file that’s accessible by their VS Code workspace so they can run inference on remote models (#1283, #1285, #1300)
  • vscode: Added datadog logging and telemetry (#1302)
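The AIConfig: Set API Keys command above guides users toward a .env file readable from their VS Code workspace. A minimal sketch of such a file is shown below; the key name matches the OPENAI_API_KEY variable referenced elsewhere in these notes, and the value is a placeholder:

```shell
# Hypothetical .env contents; substitute your real key.
OPENAI_API_KEY="your-openai-api-key"
```

Additional provider keys follow the same KEY="value" pattern, one per line.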

Bug Fixes / Tasks

  • vscode: Rendered extension editor as read-only until the server is ready or if the server is in a bad state (#1260, #1282, #1306, #1308)
  • editor: Avoided polling server status heartbeat when in read-only mode (#1325)
  • vscode: Used the default text editor diff view for viewing .aiconfig.[json/yaml/yml] file changes (#1268)
  • vscode: Skipped updating custom editor webview if there are no document content changes, preventing an issue where editor inputs were being remounted while typing (#1279)
  • vscode: Fixed a new/untitled AIConfig flow to only open a single tab and prompt for the new file name on save (#1351)

v1.1.26

20 Feb 17:26

Changelog

(2024-02-20) Python Version 1.1.26, NPM Version 1.1.10

Last PR included in this release: #1264

Features

  • editor: Added support for modifying general prompt metadata, such as remember_chat_context, below model settings (#1205)
  • editor: Added logging events for share and download button clicks as well as any actions that edit the config (#1217, #1220)
  • extensions: Added a conversational model parser to the Hugging Face remote inference extension and an input model schema to the editor client (#1229, #1230)

Bug Fixes / Tasks

  • editor: Updated the ‘model’ value in model settings to clear when the model for a prompt is updated; this matters for general model groups, such as Hugging Face Tasks, which require the model field to specify a specific model name (#1245, #1257)
  • extensions: Set default model names for the Hugging Face remote inference model parsers for Summarization, Translation, Automatic Speech Recognition and Text-to-Speech tasks (#1246, #1221)
  • gradio-notebook: Fixed styles for checkboxes, markdown links, loading spinners and output lists, as well as general cleanup to buttons and input sizing (#1248, #1249, #1250, #1251, #1252, #1231)
  • python-sdk: Fixed dependency issue to no longer pin pydantic to 2.4.2 so that aiconfig-editor can be compatible with other libraries (#1225)

Documentation

  • [updated] Added new content to the Gradio Notebooks documentation, including a 5-minute tutorial video, local model support, a more streamlined content format, and warnings for known issues with the Gradio SDK version (#1247, #1234, #1243, #1238)

v1.1.22

12 Feb 16:55

Changelog

(2024-02-12) Python Version 1.1.22, NPM Version 1.1.9

Features

  • vscode: Now utilizes the user's Python interpreter in the VS Code environment when installing dependencies for the AIConfig Editor extension. PR #1151
  • vscode: Added a command for opening an AIConfig file directly. PR #1164
  • vscode: Added a VS Code command for displaying a Welcome Page on how to use the extension effectively. PR #1194

Bug Fixes / Tasks

  • Python SDK:
    • AIConfig Format Support: Added support for chats starting with an assistant (AI) message by making the initial prompt input empty. PR #1158
    • Dependency Management: Pinned google-generativeai module version to >=0.3.1 in requirements.txt files. PR #1171
    • Python Version Requirement: Defined all pyproject.toml files to require Python version >= 3.10. PR #1146
  • VS Code:
    • Extension Dependencies: Removed the Hugging Face extension from VS Code extension dependencies. PR #1167
    • Editor Component Theming: Fixed color scheming in the AIConfig editor component to match VS Code settings. PR #1168, PR #1176
    • Share Command Fix: Fixed an issue where the Share command was not working for unsigned AWS S3 credentials. PR #1213
    • Notification Issue: Fixed an issue where a notification, “Failed to start aiconfig server,” would show when closing a config with unsaved changes. PR #1201
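The Python Version Requirement item above corresponds to a pyproject.toml constraint along these lines (a minimal sketch of the relevant field, not the full file; the package name shown is illustrative):

```toml
[project]
name = "python-aiconfig"      # illustrative; actual files contain more fields
requires-python = ">=3.10"
```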

Documentation

  • Tutorials and Guides:
    • Created a getting-started tutorial for Gradio Notebooks (see documentation).
    • Created a cookbook for RAG with model-graded evaluation. PR #1169, PR #1200

v1.1.20

07 Feb 00:50

Changelog

(2024-02-06) Python Version 1.1.20, NPM Version 1.1.9

Last PR included in this release: #1153

Features

  • python-sdk: Updated required python version from >=3.7 → >=3.10 (#1146). This is a breaking change if you were previously using <=3.9, so please update your python version if needed
  • python-sdk: Removed explicit authorization check in Azure model parser client (#1080). This is a breaking change if you were previously setting the api key with client.api_key. You must now set the AZURE_OPENAI_KEY, AZURE_OPENAI_ENDPOINT, OPENAI_API_VERSION and OPENAI_API_KEY variables in your environment
  • editor: Built keyboard shortcut to run prompt from prompt input textbox using CTRL + Enter or SHIFT + Enter (#1135)
  • editor: Supported Markdown rendering for AIConfig description field (#1094)
  • vscode: Created VS Code extension for AIConfig Editor (#1075)
  • python-sdk: Created methods load_json and load_yaml (#1057)
  • vscode: Created to_string SDK method and server endpoint for the VS Code extension and local editor (#1058, #1059)
  • editor: Added ability to override Mantine’s showNotification function with a custom callback (#1030). Will be used in the VS Code extension to use VS Code’s built-in notification framework
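For the Azure model parser change above, the four required environment variables can be exported before running. The values below are placeholders, and the api-version string is only an example:

```shell
# Placeholder values; substitute your real Azure OpenAI credentials.
export AZURE_OPENAI_KEY="your-azure-openai-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export OPENAI_API_VERSION="2023-12-01-preview"  # example version string
export OPENAI_API_KEY="your-openai-api-key"
```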

Bug Fixes / Tasks

  • python-sdk: Fixed event loop crash caused by re-running the Gemini API a 2nd time (#1139)
  • editor: Added Download button to read-only mode (#1071)
  • python-sdk: Added logic to register models and model parsers during create() command, not just load() (#1078)
  • editor: Fixed the width for the title, description, global parameters and add prompt components to match size of the rest of the prompt container components (#1077, #1081, #1124)
  • editor: Clarified error message when in JSON-editor mode for prompt inputs and switching between models, including how to fix it by toggling off JSON-editor mode (#1118)
  • editor: Placed + button into its own row to ensure it does not float above the parameters key-value pairs (#1084)
  • editor: Limited the prompt input settings height to match the size of the prompt input so it is not too large (#1051)
  • editor: Added prompt input settings schema for local Hugging Face task names (#1110)
  • editor: Removed Save button if no callback is implemented for it (#1123)
  • python-sdk: Exported missing default model parsers (#1112)
  • editor: Created separate prompt input schemas for Dall-E 2 vs. Dall-E 3 (#1138)

Documentation

  • [new] Created Azure OpenAI and Claude Bedrock cookbook tutorials (#1088)
  • [update] Changed “Share” button to “Share Notebook” or “Share Workbook” (#1148)
  • [update] Fixed broken links in RAG cookbook tutorial (#1122)

v1.1.18

07 Feb 00:25

Changelog

(2024-01-30) Python Version 1.1.18, NPM Version 1.1.8

Last PR included in this release: #1060

Features

  • python-sdk: Created Claude model parser for Bedrock (AWS) library. Added it to core model parsers in python-aiconfig (#1039)
  • python-sdk: Created variant of Open AI model parser class which uses Azure endpoints. Added it to core model parsers in python-aiconfig (#1034)
  • extension: Created model parsers for the following HuggingFace tasks, leveraging the HuggingFace remote inference client. Added them to aiconfig-extension-hugging-face
    • Automatic Speech Recognition (#1020)
    • Image-to-Text (#1018)
    • Summarization (#993)
    • Text-to-Image (#1009)
    • Text-to-Speech (#1015)
    • Translation (#1004)
  • python-sdk: Moved Gemini model parser to the main python-aiconfig package. aiconfig-extension-gemini is now deprecated (#987)
  • editor: Added Share button, which implements a callback to return a URL redirect to render a read-only version of the AIConfig instance (#1049). This will be used for Gradio Notebooks so Hugging Face space viewers can share their AIConfig session with others. We will have more details on this when it launches in upcoming weeks!
  • editor: Added Download button, which implements a callback to download existing AIConfig session to a local file (#1061). Like the Share button, this will be implemented for Gradio Notebooks on HuggingFace spaces
  • editor: Defined AIConfigEditor prop for setting light/dark mode UI theme (#1063)
  • editor: Added prompt input settings schemas for Hugging Face remote inference task names and Claude Bedrock (#1029, #1050)

Bug Fixes / Tasks

  • python-sdk: Fixed bug where we were not resolving parameters that referenced earlier prompts if those referenced prompts contained non-text input(s) or output(s) (#1065)
  • python-sdk: Refactored OpenAI model parser to use a client object instead of directly updating api_key, enabling us to create the OpenAI Azure variant (#999)
  • editor: Disabled interactions for prompt name, model selector, and model settings while in read-only mode (#1027, #1028)
  • editor: Hardcoded default models for remote inference HuggingFace tasks. Users can still edit the model they want to use, but they aren’t required to define it themselves (#1048)
  • editor: Set default max_new_tokens value from 20 → 400 for the HuggingFace remote inference Text Generation model parser (#1047)
  • python-sdk: Removed unused mocks from serialize() tests (#1064)

Documentation

  • [new] Created cookbook for Retrieval Augmented Generation (RAG) with MongoDB Vector Search (#1011)
  • [updated] Fixed typos in the aiconfig-extension-llama-guard extension requirements.txt file and also improved cookbook (#998)

v1.1.15

25 Jan 20:45

Changelog

(2024-01-23) Python Version 1.1.15, NPM Version 1.1.7

Last PR included in this release: #995

Features

  • sdk: Updated input attachments with AttachmentDataWithStringValue type to distinguish the data representation ‘kind’ (file_uri or base64) (#929). Please note that this can break existing SDK calls for model parsers that use non-text inputs
  • editor: Added telemetry data to log editor usage. Users can opt-out of telemetry by setting allow_usage_data_sharing: False in the .aiconfigrc runtime configuration file (#869, #899, #946)
  • editor: Added CLI rage command so users can submit bug reports (#870)
  • editor: Changed streaming format to be output chunks for the running prompt instead of entire AIConfig (#896)
  • editor: Disabled run button on other prompts if a prompt is currently running (#907)
  • editor: Made callback handler props optional and no-op if not included (#941)
  • editor: Added mode prop to customize UI themes on client, as well as match user dark/light mode system preferences (#950, #966)
  • editor: Added read-only mode where editing of AIConfig is disabled (#916, #935, #936, #939, #967, #961, #962)
  • eval: Generalized params to take in arbitrary dict instead of list of arguments (#951)
  • eval: Created @metric decorator to make defining metrics and adding tests easier by only needing to define the evaluation metric implementation inside the inner function (#988)
  • python-sdk: Refactored delete_output to set outputs attribute of Prompt to None rather than an empty list (#811)
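The telemetry opt-out described above is a one-line setting in the .aiconfigrc runtime configuration file:

```yaml
# .aiconfigrc — opt out of editor usage telemetry
allow_usage_data_sharing: False
```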

Bug Fixes / Tasks

  • editor: Refactored run prompt server implementation to use stop_streaming, output_chunk, aiconfig_chunk, and aiconfig so server can more explicitly pass data to client (#914, #911)
  • editor: Split RUN_PROMPT event on client into RUN_PROMPT_START, RUN_PROMPT_CANCEL, RUN_PROMPT_SUCCESS, and RUN_PROMPT_ERROR (#925, #922, #924)
  • editor: Rearranged default model ordering to be more user-friendly (#994)
  • editor: Centered the Add Prompt button and fixed styling (#912, #953)
  • editor: Fixed an issue where changing the model for a prompt resulted in the model settings being cleared; now they will persist (#964)
  • editor: Cleared outputs when first clicking the run button in order to make it clearer that new outputs are created (#969)
  • editor: Fixed bug to display array objects in model input settings properly (#902)
  • python-sdk: Fixed issue where we were referencing PIL.Image as a type instead of a module in the HuggingFace image_2_text.py model parser (#970)
  • editor: Connected HuggingFace model parser tasks names to schema input renderers (#900)
  • editor: Fixed float model settings schema renderer to number (#989)

Documentation

  • [new] Added docs page for AIConfig Editor (#876, #947)
  • [updated] Renamed “variables” to “parameters” to make it less confusing (#968)
  • [updated] Updated Getting Started page with quickstart section, and more detailed instructions for adding API keys (#956, #895)

v1.1.12

11 Jan 18:35

Changelog

(2024-01-11) Python Version 1.1.12, NPM Version 1.1.5

We built an AIConfig Editor, which is like VS Code + Jupyter notebooks for AIConfig files! You can edit the config prompts, parameters, and settings, and, most importantly, run them to generate outputs. Source control your AIConfig files by clearing outputs and saving. It’s the most convenient way to work with Generative AI models through a local user interface. See the README to learn how to use it!

Editor Capabilities (see linked PRs for screenshots and videos)

  • Add and delete prompts (#682, #665)
  • Select prompt model and model settings with easy-to-read descriptions (#707, #760)
  • Modify local and global parameters (#673)
  • Run prompts with streaming or non-streaming outputs (#806)
  • Cancel inference runs mid-execution (#789)
  • Modify name and description of AIConfig (#682)
  • Render input and outputs as text, image, or audio format (#744, #834)
  • View prompt input, output, model settings in both regular UI display or purely in raw JSON format (#686, #656, #757)
  • Copy and clear prompt output results (#656, #791)
  • Autosave every 15s, or press (CTRL/CMD) + S or Save button to do it manually (#734, #735)
  • Edit on existing AIConfig file or create a new one if not specified (#697)
  • Run multiple editor instances simultaneously (#624)
  • Error handling for malformed input + settings data, unexpected outputs, and heartbeat status when server has disconnected (#799, #803, #762)
  • Specify explicit model names to use for generic HuggingFace model parser tasks (#850)

Features

  • sdk: Schematized prompt OutputData format to be of type string, OutputDataWithStringValue, or OutputDataWithToolCallsValue (#636). Please note that this can break existing SDK calls
  • extensions: Created 5 new HuggingFace local transformers model parsers: text-to-speech, image-to-text, automatic speech recognition, text summarization, & text translation (#793, #821, #780, #740, #753)
  • sdk: Created Anyscale model parser and cookbook to demonstrate how to use it (#730, #746)
  • python-sdk: Explicitly set model in completion params for several model parsers (#783)
  • extensions: Refactored HuggingFace model parsers to use default model for pipeline transformer if model is not provided (#863, #879)
  • python-sdk: Made get_api_key_from_environment non-required and nullable-returning, wrapping the result in Result-Ok (#772, #787)
  • python-sdk: Created get_parameters method (#668)
  • python-sdk: Added exception handling for add_output method (#687)
  • sdk: Changed run output type to be list[Output] instead of Output (#617, #618)
  • extensions: Refactored HuggingFace text-to-image model parser response data into a single object (#805)
  • extensions: Renamed python-aiconfig-llama to aiconfig-extension-llama (#607)
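The get_api_key_from_environment change above (non-required, nullable, Result-Ok wrapped) can be illustrated with a generic, stdlib-only sketch. This mirrors the described behavior but is not the actual SDK implementation; the Ok/Err types here are stand-ins for the Result library the SDK uses:

```python
import os
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Ok:
    value: Optional[str]

@dataclass
class Err:
    message: str

def get_api_key_from_environment(key_name: str, required: bool = True) -> Union[Ok, Err]:
    """Illustrative sketch: look up an API key, returning Result-style values.

    When required=False the key may be absent, so Ok(None) is a valid result
    instead of an error.
    """
    value = os.environ.get(key_name)
    if value is None and required:
        return Err(f"Missing API key: set the {key_name} environment variable.")
    return Ok(value)
```

Callers can then branch on Ok/Err instead of catching exceptions, which is what makes the key optional and the return value nullable.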

Bug Fixes / Tasks

  • python-sdk: Fixed get_prompt_template() issue for non-text prompt inputs (#866)
  • python-sdk: Fixed core HuggingFace library issue where response type was not a string (#769)
  • python-sdk: Fixed bug by adding kwargs to ParameterizedModelParser (#882)
  • python-sdk: Added automated tests for add_output() method (#687)
  • python-sdk: Updated set_parameters() to work if parameters haven’t been defined already (#670)
  • python-sdk: Removed callback_manager argument from run method (#886)
  • extensions: Removed extra python dir from aiconfig-extension-llama-guard (#653)
  • python-sdk: Removed unused model-ids from OpenAI model parser (#729)

Documentation