Prepare 2.0.0 release #68

Merged
merged 2 commits on Mar 25, 2023
2 changes: 1 addition & 1 deletion .bumpversion.cfg
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 1.0.3
+current_version = 2.0.0
 parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)
 serialize =
     {major}.{minor}.{patch}
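The `parse` pattern in the hunk above can be exercised on its own; a minimal sketch using Python's `re` module (the `parse_version` helper and the sample string are illustrative, not part of bumpversion):

```python
import re

# The same pattern .bumpversion.cfg uses to split a version
# string into its named major/minor/patch components.
PARSE = re.compile(r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)")

def parse_version(version: str) -> dict:
    """Return the integer major/minor/patch parts of a version string."""
    match = PARSE.fullmatch(version)
    if match is None:
        raise ValueError(f"not a valid version: {version!r}")
    return {name: int(value) for name, value in match.groupdict().items()}

print(parse_version("2.0.0"))  # {'major': 2, 'minor': 0, 'patch': 0}
```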
50 changes: 38 additions & 12 deletions CHANGELOG.md
@@ -4,23 +4,49 @@
 
 ## Unreleased
 
-#### Enhancements:
+## `2.0.0`
 
-- **Code refactoring**: Improved code organization and readability by extracting functions for crafting messages and generating smaller reprs with `craft_message`, `repr_genai_pandas`, and `repr_genai`. 🧰
-- **Enhanced pandas support**: Optimized DataFrame and Series representation for GPT-3 and GPT-4 by using Markdown format. 📊
-- **Token management**: Introduced a new module `tokens.py` with utility functions `num_tokens_from_messages` and `trim_messages_to_fit_token_limit` to handle token count and message trimming based on model limitations and your wallet. 💸
+### Enhancements:
 
-#### Changes:
+#### Added
 
-- `craft_user_message` now uses the new `craft_message` function.
-- `craft_output_message` now uses the new `repr_genai` function.
-- The `get_historical_context` function now accepts an additional `model` parameter to support different GPT models and has been updated to use `tokens.trim_messages_to_fit_token_limit`.
-- The `ignore_tokens` list now uses the term "first line" instead of "start" for clarity.
-- Introduced support for GPT-4 token counting and message trimming in `tokens.py`.
+- 🔄 Keep conversations flowing with `%%assist` (#66)
+- 🖼️ Emit suggestions as `Markdown` instead of creating new cells (#66)
+- 🚀 Model selection made easy with the `--model` flag for `%%assist` (#65)
+- 💡 Introducing `GenaiMarkdown` – a dynamic Markdown display (#61)
+- 📝 Create a `%%prompt` magic for setting the default prompts for assistance and exceptions (#71, #69)
 
-#### Bug Fixes:
+#### Changed
 
-- N/A
+- 🧪 Craft a more ipythonic context manager (#62, #66)
+  - Meet the new `Context` class: capture IPython history and make it ChatCompletion-friendly
+  - Farewell `get_historical_context`, hello `build_context`: context construction using the new `Context` class
+  - Reduce messages sent to GPT models by trimming based on estimated number of tokens (#57)
+- 🎯 Type annotations step in! (#59)
+
+#### Improved
+
+- 📏 Token length checks now available in `%%assist` (#57)
+- 🧹 Code refactoring: introducing `craft_message`, `repr_genai_pandas`, and `repr_genai` for more organized and readable code
+- 📈 Enhanced pandas support: optimized DataFrame and Series representation for Large Language Model consumption using Markdown format
+- 💰 Token management: a new module `tokens.py` featuring `num_tokens_from_messages` and `trim_messages_to_fit_token_limit` to help you stay within model limitations and budget
+- 📚 Update assist magic documentation (#70)
+
+#### Removed
+
+- 🚫 `%%assist` no longer generates new code cells. It now creates Markdown output instead (#66)
+  - Relatedly, `in-place` is no longer an option since we do not change the cells
+
+### Changes:
+
+- `craft_user_message` now relies on the new `craft_message` function
+- `craft_output_message` has been upgraded to use the new `repr_genai` function
+- `get_historical_context` now sports an additional `model` parameter and utilizes `tokens.trim_messages_to_fit_token_limit`
+- For clarity, the `ignore_tokens` list now uses the term "first line" instead of "start"
+- GPT-4 token counting and message trimming now supported in `tokens.py`
 
 ## `1.0.3`
 
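The token trimming the changelog mentions (`tokens.trim_messages_to_fit_token_limit`, #57) is not part of this diff. A rough sketch of the idea, using a naive whitespace word count as a stand-in for the library's model-aware tokenizer (`estimate_tokens`, `trim_messages`, and the sample history are all illustrative names, not genai's API):

```python
def estimate_tokens(message: dict) -> int:
    """Very rough token estimate: whitespace-split words in the content.
    (The real tokens.py counts tokens per model; this is a stand-in.)"""
    return len(message["content"].split())

def trim_messages(messages: list, limit: int) -> list:
    """Drop the oldest messages until the estimated total fits the limit,
    keeping the most recent conversational context intact."""
    trimmed = list(messages)
    while trimmed and sum(estimate_tokens(m) for m in trimmed) > limit:
        trimmed.pop(0)  # discard the oldest message first
    return trimmed

history = [
    {"role": "user", "content": "first question about pandas"},
    {"role": "assistant", "content": "a long detailed answer " * 5},
    {"role": "user", "content": "follow-up question"},
]
# With a small limit, only the most recent message survives.
print(trim_messages(history, limit=10))
```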
2 changes: 1 addition & 1 deletion genai/_version.py
@@ -1 +1 @@
-__version__ = "1.0.3"
+__version__ = "2.0.0"
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -5,7 +5,7 @@
 
 [tool.poetry]
 name = "genai"
-version = "1.0.3"
+version = "2.0.0"
 description = "Generative AI for IPython (enhance your code cells)"
 authors = ["Kyle Kelley <rgbkrk@gmail.com>"]
 maintainers = ["Kyle Kelley <rgbkrk@gmail.com>"]
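The changelog above mentions rendering DataFrames and Series as Markdown for language-model consumption (pandas itself exposes `DataFrame.to_markdown()` for this). The shape of that output can be sketched without pandas; `to_markdown_table` is a hypothetical helper, not genai's `repr_genai_pandas`:

```python
def to_markdown_table(columns: list, rows: list) -> str:
    """Render rows of data as a Markdown table, roughly the shape
    tabular data takes when serialized into a chat-model prompt."""
    header = "| " + " | ".join(columns) + " |"
    divider = "|" + "|".join(["---"] * len(columns)) + "|"
    body = ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join([header, divider, *body])

print(to_markdown_table(["name", "version"], [["genai", "2.0.0"]]))
# | name | version |
# |---|---|
# | genai | 2.0.0 |
```

Markdown is a compact, line-oriented format that chat models parse reliably, which is presumably why it was chosen over a raw `repr`.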