diff --git a/README.md b/README.md index 07ccae3c3..09071d2a1 100644 --- a/README.md +++ b/README.md @@ -91,7 +91,7 @@ pip install 'prompt-declaration-language[examples]' You can run PDL with LLM models in local using [Ollama](https://ollama.com), or other cloud service. -If you use WatsonX: +If you use watsonx: ```bash export WATSONX_URL="https://{region}.ml.cloud.ibm.com" export WATSONX_APIKEY="your-api-key" diff --git a/docs/README.md b/docs/README.md index b3e66ea6e..0fe9b5bae 100644 --- a/docs/README.md +++ b/docs/README.md @@ -53,8 +53,8 @@ In order to run these examples, you need to create a free account on Replicate, get an API key and store it in the environment variable: - `REPLICATE_API_TOKEN` -In order to use foundation models hosted on [watsonx](https://www.ibm.com/watsonx) via LiteLLM, you need a WatsonX account (a free plan is available) and set up the following environment variables: -- `WATSONX_URL`, the API url (set to `https://{region}.ml.cloud.ibm.com`) of your WatsonX instance. The region can be found by clicking in the upper right corner of the watsonx dashboard (for example a valid region is `us-south` ot `eu-gb`). +In order to use foundation models hosted on [watsonx](https://www.ibm.com/watsonx) via LiteLLM, you need a watsonx account (a free plan is available) and must set the following environment variables: +- `WATSONX_URL`, the API URL (set to `https://{region}.ml.cloud.ibm.com`) of your watsonx instance. The region can be found by clicking in the upper right corner of the watsonx dashboard (for example, a valid region is `us-south` or `eu-gb`).
- `WATSONX_APIKEY`, the API key (see information on [key creation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key)) - `WATSONX_PROJECT_ID`, the project hosting the resources (see information about [project creation](https://www.ibm.com/docs/en/watsonx/saas?topic=projects-creating-project) and [finding project ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-project-id.html?context=wx)). @@ -83,8 +83,8 @@ The PDL repository has been configured so that every `*.pdl` file is associated ``` The interpreter executes Python code specified in PDL code blocks. To sandbox the interpreter for safe execution, -you can use the `--sandbox` flag which runs the interpreter in a docker container. Without this flag, the interpreter -and all code is executed locally. To use the `--sandbox` flag, you need to have a docker daemon running, such as +you can use the `--sandbox` flag, which runs the interpreter in a Docker-compatible container. Without this flag, the interpreter +and all code are executed locally. To use the `--sandbox` flag, you need to have a Docker-compatible daemon running, such as [Rancher Desktop](https://rancherdesktop.io). The interpreter prints out a log by default in the file `log.txt`. This log contains the details of inputs and outputs to every block in the program. It is useful to examine this file when the program is behaving differently than expected. The log displays the exact prompts submitted to models by LiteLLM (after applying chat templates), which can be @@ -150,7 +150,7 @@ Hello where the second `Hello` was produced by Granite. In general, PDL provides blocks for calling models, Python code, and makes it easy to compose them together with control structures (sequencing, conditions, loops). -A similar example on WatsonX would look as follows: +A similar example on watsonx would look as follows: ```yaml description: Hello world @@ -163,7 +163,7 @@ text: - '!' ``` -Notice the syntactic differences. 
Model ids on WatsonX start with `watsonx`. The `decoding_method` can be set to `greedy`, rather than setting the temperature to `0`. Also, `stop_sequences` are indicated with the keyword `stop` instead as a list of strings. +Notice the syntactic differences. Model IDs on watsonx start with `watsonx`. The `decoding_method` can be set to `greedy`, rather than setting the temperature to `0`. Also, `stop_sequences` are indicated with the keyword `stop` instead, as a list of strings. A PDL program computes 2 data structures. The first is a JSON corresponding to the result of the overall program, obtained by aggregating the results of each block. This is what is printed by default when we run the interpreter. The second is a conversational background context, which is a list of role/content pairs, where we implicitly keep track of roles and content for the purpose of communicating with models that support chat APIs. The contents in the latter correspond to the results of each block. The conversational background context is what is used to make calls to LLMs via LiteLLM. diff --git a/examples/cldk/cldk-assistant.pdl b/examples/cldk/cldk-assistant.pdl index 401098948..55a861d12 100644 --- a/examples/cldk/cldk-assistant.pdl +++ b/examples/cldk/cldk-assistant.pdl @@ -18,7 +18,7 @@ text: cldk = CLDK("java") cldk_state = cldk.analysis( project_path="${ project }", # Change this to the path of the project you want to analyze. - # langguage="java", # The langguage of the project. + # language="java", # The language of the project. # backend="codeanalyzer", # The backend to use for the analysis. # analysis_db="/tmp", # A temporary directory to store the analysis results. # sdg=True, # Generate the System Dependence Graph (SDG) for the project. 
diff --git a/examples/fibonacci/fib.pdl b/examples/fibonacci/fib.pdl index 394fd46cf..ef87e6052 100644 --- a/examples/fibonacci/fib.pdl +++ b/examples/fibonacci/fib.pdl @@ -1,30 +1,44 @@ +# Demonstrate using an LLM to write a program to compute Fibonacci numbers, +# and invoke generated code with a random number description: Fibonacci + text: +# Use IBM Granite to author a program that computes the Nth Fibonacci number - def: CODE model: replicate/ibm-granite/granite-3.0-8b-instruct input: "Write a Python function to compute the Fibonacci sequence. Do not include a doc string.\n\n" parameters: + # Request no randomness when generating code temperature: 0 - + +# Pick a random number 1..20 - "\nFind a random number between 1 and 20\n" - def: N lang: python code: | import random result = random.randint(1, 20) + - "\nNow computing fibonacci(${ N })\n" + +# Extract the LLM response inside backticks as executable Python code, and set PDL variable EXTRACTED - def: EXTRACTED lang: python code: | s = """'${ CODE } '""" result = s.split("```")[1].replace("python", "") + +# Run the extracted Fibonacci function using a random number - def: RESULT lang: python code: | ${ EXTRACTED } result = fibonacci(${ N }) + # (Don't store the result in the PDL context; store it in a PDL variable called RESULT) contribute: [] - 'The result is: ${ RESULT }' + +# Invoke the LLM again to explain the PDL context - "\n\nExplain what the above code does and what the result means\n\n" - model: replicate/ibm-granite/granite-3.0-8b-instruct \ No newline at end of file diff --git a/src/pdl/pdl.py b/src/pdl/pdl.py index b89d7a331..5af7736b7 100644 --- a/src/pdl/pdl.py +++ b/src/pdl/pdl.py @@ -157,7 +157,7 @@ def main(): parser.add_argument( "--sandbox", action=argparse.BooleanOptionalAction, - help="run the interpreter in a container, a docker daemon must be running", + help="run the interpreter in a container, a Docker-compatible daemon must be running", ) parser.add_argument( "-f", @@ -191,7 +191,7 
@@ def main(): parser.add_argument( "--schema", action="store_true", - help="generate PDL Json Schema and exit", + help="generate PDL JSON Schema and exit", default=False, ) parser.add_argument(
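The `EXTRACTED` block added to `examples/fibonacci/fib.pdl` above works by splitting the model response on triple-backtick fences and stripping the `python` language tag before `exec`-ing the result. A minimal standalone sketch of that extraction step, using a hypothetical model response (`SAMPLE` is made up, not real Granite output), is:

```python
# Sketch of the code-extraction step used by the EXTRACTED block in
# examples/fibonacci/fib.pdl: take an LLM response, pull out the code
# between the first pair of triple-backtick fences, and drop the
# "python" language tag.
FENCE = "`" * 3  # the fence string, built programmatically so this snippet nests cleanly

# Hypothetical model response containing a fenced Python code block.
SAMPLE = (
    "Here is the function:\n"
    f"{FENCE}python\n"
    "def fibonacci(n):\n"
    "    a, b = 0, 1\n"
    "    for _ in range(n):\n"
    "        a, b = b, a + b\n"
    "    return a\n"
    f"{FENCE}\n"
)

# Mirrors the expression in fib.pdl: split on the fence, keep the first
# fenced segment, and strip the language tag.
extracted = SAMPLE.split(FENCE)[1].replace("python", "", 1)

namespace = {}
exec(extracted, namespace)  # define fibonacci() from the extracted source
print(namespace["fibonacci"](10))  # fibonacci(10) is 55
```

This is also the fragile part of the example: if the model emits more than one fenced block, or no fence at all, `split(FENCE)[1]` grabs the wrong text or raises `IndexError`, which is why the PDL prompt asks for a single bare function with no doc string.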