Releases · svilupp/PromptingTools.jl
PromptingTools v0.7.0
Added
- Added new Experimental sub-module AgentTools, introducing the `AICall` (incl. `AIGenerate`) and `AICodeFixer` structs. The `AICall` struct provides a "lazy" wrapper for `ai*` functions, enabling efficient and flexible AI interactions and building Agentic workflows.
- Added the first AI Agent: `AICodeFixer`, which iteratively analyzes and improves any code provided by an LLM by evaluating it in a sandbox. It allows a lot of customization (templated responses, feedback function, etc.). See `?AICodeFixer` for more information on usage and `?aicodefixer_feedback` for an example implementation of the feedback function.
- Added a `@timeout` macro to allow limiting the execution time of a block of code in `AICode` via the `execution_timeout` kwarg (prevents infinite loops, etc.). See `?AICode` for more information.
- Added a `preview(conversation)` utility that allows you to quickly preview the conversation in Markdown format in your REPL. Requires the `Markdown` package for the extension to be loaded.
- Added an `ItemsExtract` convenience wrapper for `aiextract` for when you want to extract one or more of a specific `return_type` (eg, `return_type = ItemsExtract{MyMeasurement}`)
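A minimal sketch of the lazy workflow described above (assumes a configured OpenAI API key; the exact interface may differ between versions, so see `?AICall` and `?AIGenerate` for the authoritative usage):

```julia
using PromptingTools
using PromptingTools.Experimental.AgentTools

# AIGenerate wraps aigenerate lazily -- no request is sent at construction time
out = AIGenerate("Say hi five times."; model = "gpt-3.5-turbo")

# run! triggers the actual API call and stores the conversation in the struct
run!(out)
```

The lazy design lets agents (such as `AICodeFixer`) re-run or extend the same call with feedback without rebuilding the prompt from scratch.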
Fixed
- Fixed `aiembed` to accept any `AbstractVector` of documents (eg, a view of a vector of documents)
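For illustration, the fix means a view can now be passed directly (sketch; requires a configured embedding backend such as OpenAI):

```julia
using PromptingTools

docs = ["First document", "Second document", "Third document"]

# Previously only a plain Vector worked; now any AbstractVector is accepted:
msg = aiembed(@view(docs[1:2]))
```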
PromptingTools v0.6.0
Added
- `@ai_str` macros now support multi-turn conversations. The `ai"something"` call will automatically remember the last conversation, so you can simply reply with `ai!"my-reply"`. If you send another message with `ai""`, you'll start a new conversation. The same applies to the asynchronous versions `aai""` and `aai!""`.
- Created a new default schema for Ollama models, `OllamaSchema` (replacing `OllamaManagedSchema`), which allows multi-turn conversations and conversations with images (eg, with the Llava and Bakllava models). `OllamaManagedSchema` has been kept for compatibility and as an example of a schema where the prompt is provided as a string (not as dictionaries like the OpenAI API).
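The multi-turn string-macro flow above can be sketched as follows (assumes a configured API key; responses obviously depend on the model):

```julia
using PromptingTools

ai"What is the capital of France?"     # starts a new conversation
ai!"And what is its population?"       # continues the previous conversation
ai"Now an unrelated question"          # starts a fresh conversation again
```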
Fixed
- Removed template `RAG/CreateQAFromContext` because it's a duplicate of `RAG/RAGCreateQAFromContext`
PromptingTools v0.5.0
Added
- Experimental sub-module RAGTools providing basic Retrieval-Augmented Generation functionality. See `?RAGTools` for more information. It's all nested inside of `PromptingTools.Experimental.RAGTools` to signify that it might change in the future. Key functions are `build_index` and `airag`, but it also provides a suite to make evaluation easier (see `?build_qa_evals` and `?run_qa_evals`, or just see the example `examples/building_RAG.jl`)
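A minimal sketch of the RAG flow named above (assumes a configured OpenAI key for embeddings and generation; keyword names may have shifted between versions, so treat `?build_index` and `?airag` as authoritative):

```julia
using PromptingTools
using PromptingTools.Experimental.RAGTools

# Build an in-memory index over a few toy documents (chunk + embed),
# then answer a question grounded in the retrieved chunks.
index = build_index(["Paris is the capital of France.",
                     "Julia is a fast programming language."])
answer = airag(index; question = "What is the capital of France?")
```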
Fixed
- Stricter code parsing in `AICode` to avoid false positives (code blocks must end with "```\n" to catch comments inside text)
- Introduced an option `skip_invalid=true` for `AICode`, which allows you to include only code blocks that parse successfully (useful when the code definition is good, but the subsequent examples are not), and an option `capture_stdout=false` to avoid capturing stdout if you want to evaluate `AICode` in parallel (the `Pipe()` we use is NOT thread-safe)
- `OllamaManagedSchema` was passing an incorrect model name to the Ollama server, often serving the default llama2 model instead of the requested model. This is now fixed.
- Fixed a bug in the `model` kwarg handling when leveraging `PT.MODEL_REGISTRY`
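The two `AICode` options above can be combined like this (sketch; requires a configured LLM backend, and the exact field names of the `AICode` struct should be checked in `?AICode`):

```julia
using PromptingTools
using PromptingTools: AICode

msg = aigenerate("Write a Julia function `add2(x)` that returns x + 2.")

# Keep only code blocks that parse successfully, and skip stdout capture
# so the evaluation can run safely in parallel (Pipe() is not thread-safe)
cb = AICode(msg; skip_invalid = true, capture_stdout = false)
cb.error === nothing && @info "Generated code evaluated cleanly"
```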
Merged pull requests:
- fix AICode parser (#31) (@svilupp)
- Make stdout capture optional (#32) (@svilupp)
- Fallback parser to expect newlines (#33) (@svilupp)
- Fix model kwarg in Ollama (#34) (@svilupp)
- Enable ollama tests (#35) (@svilupp)
- Add RAG Tools (#36) (@svilupp)
- Update docs (#37) (@svilupp)
- Fix params kwarg in run_qa_evals (#38) (@svilupp)
PromptingTools v0.4.0
Changes
- Improved `AICode` parsing and error handling (eg, capture more REPL prompts, detect parsing errors earlier, parse more code fence types), including the option to remove unsafe code (eg, `Pkg.add("SomePkg")`) with `AICode(msg; skip_unsafe=true, verbose=true)`
- Added new prompt templates: `JuliaRecapTask`, `JuliaRecapCoTTask`, `JuliaExpertTestCode`, and updated `JuliaExpertCoTTask` to be more robust against early stopping for smaller OSS models
- Added support for the MistralAI API via `MistralOpenAISchema()`. All their standard models have been registered, so you should be able to just use `model="mistral-tiny"` in your `aigenerate` calls without any further changes. Remember to either provide `api_kwargs.api_key` or ensure you have the ENV variable `MISTRALAI_API_KEY` set.
- Added support for any OpenAI-compatible API via `schema=CustomOpenAISchema()`. All you have to do is provide your `api_key` and `url` (base URL of the API) in the `api_kwargs` keyword argument. This option is useful if you use Perplexity.ai, Fireworks.ai, or other similar services.
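Connecting to an OpenAI-compatible server as described above can be sketched like this (the model name and URL are hypothetical placeholders for your own deployment):

```julia
using PromptingTools
const PT = PromptingTools

schema = PT.CustomOpenAISchema()
msg = aigenerate(schema, "Say hi!";
    model = "my-hosted-model",                      # hypothetical model name
    api_kwargs = (; url = "http://localhost:8080",  # base URL of the API
                    api_key = "sk-..."))            # your provider's key
```

For the registered Mistral models, no schema is needed at all: `aigenerate("Say hi!"; model = "mistral-tiny")` should route through `MistralOpenAISchema` automatically (with `MISTRALAI_API_KEY` set).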
PromptingTools v0.3.0
Added
- Introduced a set of utilities for working with generated Julia code (eg, extract code-fenced Julia code with `PromptingTools.extract_code_blocks`) or simply apply `AICode` to the AI messages. `AICode` tries to extract, parse, and eval Julia code; if it fails, both stdout and errors are captured. It is useful for generating Julia code and, in the future, creating self-healing code agents.
- Introduced the ability to have multi-turn conversations. Set the keyword argument `return_all=true` and `ai*` functions will return the whole conversation, not just the last message. To continue a previous conversation, you need to provide it to the keyword argument `conversation`.
- Introduced the schema `NoSchema`, which does not change the message format; it merely replaces the placeholders with user-provided variables. It serves as the first pass of the schema pipeline and allows more code reuse across schemas.
- Support for project-based and global user preferences with Preferences.jl. See the `?PREFERENCES` docstring for more information. It allows you to persist your configuration and model aliases across sessions and projects (eg, if you would like to default to Ollama models instead of OpenAI's).
- Refactored `MODEL_REGISTRY` around the `ModelSpec` struct, so you can record the name, schema(!), and token cost of new models in a single place. The biggest benefit is that your `ai*` calls will now automatically look up the right model schema, eg, no need to define the schema explicitly for your Ollama models! See `?ModelSpec` for more information and `?register_model!` for an example of how to register a new model.
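Registering a model so that `ai*` calls resolve its schema automatically might look like this (the model alias is hypothetical; the full keyword list, including token costs, is documented in `?register_model!`):

```julia
using PromptingTools
const PT = PromptingTools

# Register a hypothetical local model under an alias; subsequent ai* calls
# with model = "my-llama" will look up the schema from MODEL_REGISTRY
PT.register_model!(;
    name = "my-llama",
    schema = PT.OllamaManagedSchema(),
    description = "Local Llama model served via Ollama")

msg = aigenerate("Hello!"; model = "my-llama")  # no explicit schema needed
```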
Fixed
- Changed the type of global `PROMPT_SCHEMA::AbstractPromptSchema` for an easier switch to local models as a default option
Breaking Changes
- `API_KEY` global variable has been renamed to `OPENAI_API_KEY` to align with the name of the environment variable and preferences
Merged pull requests:
- Use [!TIP] markdown for pro tips (#14) (@caleb-allen)
- Up version minor (#15) (@svilupp)
- Setup CodeCov (#16) (@svilupp)
- update registration + ollama health check (#17) (@svilupp)
- Change PROMPT SCHEMA to an Abstract Type (#18) (@svilupp)
- Add Coding Utils (#19) (@svilupp)
- Return full conversation (#20) (@svilupp)
- extend serialization support to DataMessages (#21) (@svilupp)
- remove julia prompt from code blocks (#22) (@svilupp)
- Change AICode Safety Error to be a Parsing Error (#23) (@svilupp)
- Parse nested code blocks (#24) (@svilupp)
- Preferences.jl integration + new model registry (#25) (@svilupp)
- Tag version v0.3.0 (#26) (@svilupp)
- Update changelog (#27) (@svilupp)
PromptingTools v0.2.0
Added
- Add support for prompt templates with the `AITemplate` struct. Search for suitable templates with `aitemplates("query string")` and then simply use them with `aigenerate(AITemplate(:TemplateABC); variableX = "some value") -> AIMessage`, or use a dispatch on the template name as a `Symbol`, eg, `aigenerate(:TemplateABC; variableX = "some value") -> AIMessage`. Templates are saved as JSON files in the folder `templates/`. If you add new templates, you can reload them with `load_templates!()` (notice the exclamation mark to override the existing `TEMPLATE_STORE`).
- Add an `aiextract` function to extract structured information from text quickly and easily. See `?aiextract` for more information.
- Add `aiscan` for image scanning (ie, image comprehension tasks). You can transcribe screenshots or reason over images as if they were text. Images can be provided either as a local file (`image_path`) or as a URL (`image_url`). See `?aiscan` for more information.
- Add support for Ollama.ai's local models. Only the `aigenerate` and `aiembed` functions are supported at the moment.
- Add a few non-coding templates, eg, verbatim analysis (see `aitemplates("survey")`) and meeting summarization (see `aitemplates("meeting")`), and supporting utilities (non-exported): `split_by_length` and `replace_words` to make it easy to work with smaller open-source models.
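The template workflow described above can be sketched as follows (assumes a configured OpenAI key; `:JuliaExpertAsk` is one of the templates that ships in `templates/`):

```julia
using PromptingTools

# Search the template store by keyword
aitemplates("Julia")

# Use a template via Symbol dispatch, filling its placeholder variables
msg = aigenerate(:JuliaExpertAsk; ask = "How do I add a package in Julia?")
```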
Merged pull requests:
- update version (#1) (@svilupp)
- Add Template Functionality (#2) (@svilupp)
- Add Extraction Functionality (#3) (@svilupp)
- Add more prompt templates (#4) (@svilupp)
- Update tests to account for new templates (#5) (@svilupp)
- Add aiscan (image comprehension) (#6) (@svilupp)
- Remove duplicated docstring in the function call signature (#7) (@svilupp)
- Add ollama support (#8) (@svilupp)
- Create docs from the README file (#9) (@svilupp)
- Add more templates (#10) (@svilupp)
- tag020 (#11) (@svilupp)
- Fail gracefully without api key (#12) (@svilupp)
v0.1.0
Full Changelog: https://github.com/svilupp/PromptingTools.jl/commits/v0.1.0
- Initial release