2 changes: 1 addition & 1 deletion docs/lab-5/README.md
@@ -95,7 +95,7 @@ Go ahead and save it to your local machine, and be ready to grab it.
!!! note
Granite 4 was trained on newer data than was available when this lab was created, so it DOES have the 2024 data. If you find that's the case, you can try the equivalent question about 2025, using the 2025 full-year budget linked below.

![budget_fy2025.pdf](https://www.whitehouse.gov/wp-content/uploads/2024/03/budget_fy2025.pdf)
[budget_fy2025.pdf](https://www.whitehouse.gov/wp-content/uploads/2024/03/budget_fy2025.pdf)

Now spin up a **New Workspace** (yes, please a new workspace; AnythingLLM sometimes has
issues with adding things, so a clean environment is always easier to teach in) and call it
13 changes: 4 additions & 9 deletions docs/lab-7/README.md
@@ -75,14 +75,11 @@ python
import mellea

m = mellea.start_session()
print(m.chat("What is the etymology of mellea?").content)
print(m.chat("tell me some fun trivia about IBM and the early history of AI.").content)
```
You can either add this to a file like `main.py` or run it in the Python REPL; if you get output,
you are set up to dig deeper with Mellea.

!!! note
If you see an error message with: "ModuleNotFoundError: No module named 'PIL'" then you will need to install the python package pillow with "pip install pillow"

## Simple email examples

!!! note
@@ -158,7 +155,7 @@ by changing from "only lower-case" to "only upper-case" and see that it will fol

Pretty neat, eh? Let's go even deeper.

Let's create an email with some sampling and have Mellea, find the best option for what we are looking for:
Let's create an email with some sampling and have Mellea find the best option for what we are looking for:
We add two requirements to the instruction, which will be added to the model request.
We don't yet check whether these requirements are satisfied; instead, we add a strategy for validating them.

Expand Down Expand Up @@ -196,9 +193,7 @@ print(
)
)
```
You might notice it fails with the above example, just remove the `"Use only lower-case letters",` line, and
it should pass on the first re-run. This brings up some interesting opportunities, so make sure that the
writing you expect is within the boundaries and it'll keep trying till it gets it right.
You might notice it fails with the above example, because the name "Olivia" has an upper-case letter in it. Remove the `"Use only lower-case letters",` line, and it should pass on the first re-run. This brings up some interesting opportunities, so make sure that the writing you expect is within the boundaries and it'll keep trying till it gets it right.
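
For reference, here is a minimal sketch of what such a sampling-and-validation call could look like. The `RejectionSamplingStrategy` class, its import path, and the `loop_budget` parameter are assumptions not shown in this diff; treat this as an illustration rather than the lab's exact code.

```python
import mellea
from mellea.stdlib.sampling import RejectionSamplingStrategy  # assumed import path

m = mellea.start_session()

# Attach requirements to the instruction and let the strategy re-sample
# until they validate (or the loop budget runs out).
email = m.instruct(
    "Write a short email inviting Olivia to the quarterly review.",
    requirements=[
        "Keep the email under 100 words",
        # "Use only lower-case letters",  # would fail: "Olivia" contains an upper-case letter
    ],
    strategy=RejectionSamplingStrategy(loop_budget=3),  # assumed parameter name
)
print(email)
```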

## Instruct Validate Repair

@@ -241,7 +236,7 @@ We create 3 requirements:

- First requirement (r1) will be validated by LLM-as-a-judge on the output of the instruction. This is the default behavior.
- Second requirement (r2) uses a function that takes the output of a sampling step and returns a boolean indicating whether validation succeeded. While the validation_fn parameter requires running validation on the full session context, Mellea provides a wrapper for simpler validation functions, simple_validate(fn: Callable[[str], bool]), that take the output string and return a boolean, as is done in this case.
- Third requirement is a check(). Checks are only used for validation, not for generation. Don't think mention purple elephants.
- Third requirement is a check(). Checks are only used for validation, not for generation. Checks aim to avoid the "do not think about B" effect that often primes models (and humans) to do the opposite and "think" about B.

Run this in your local instance and you'll see it working, ideally with no purple elephants! :)
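
For orientation, a rough sketch of how these three requirements might be declared is below. The `req`, `check`, and `simple_validate` helpers and their import path are assumptions based on the description above, not code taken from this diff.

```python
from mellea.stdlib.requirement import req, check, simple_validate  # assumed import path

# r1: validated by LLM-as-a-judge over the instruction output (the default behavior)
r1 = req("The email should have a salutation.")

# r2: validated by a plain function over the output string; simple_validate
# wraps it so it fits the validation_fn interface
r2 = req(
    "Use only lower-case letters",
    validation_fn=simple_validate(lambda output: output.lower() == output),
)

# r3: a check() is used only for validation, never for generation, so the
# prompt itself never mentions purple elephants
r3 = check("Do not mention purple elephants.")
```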
