@@ -39,7 +39,7 @@ print(model.identifier)
 
 While you can provide an `api_key` keyword argument,
 we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
-to add `LLAMA_STACK_API_KEY="My API Key"` to your `.env` file
+to add `LLAMA_STACK_CLIENT_API_KEY="My API Key"` to your `.env` file
 so that your API Key is not stored in source control.
 
 ## Async usage
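As context for this hunk: the renamed variable means the key is read from `LLAMA_STACK_CLIENT_API_KEY` when no explicit argument is given. A minimal stdlib-only sketch of that precedence (the `resolve_api_key` helper is hypothetical, not part of llama-stack-client):

```python
import os

# Hypothetical helper illustrating the resolution order implied by the docs:
# an explicit `api_key` argument wins; otherwise the environment variable
# LLAMA_STACK_CLIENT_API_KEY (e.g. loaded from .env by python-dotenv) is used.
def resolve_api_key(explicit_key=None):
    if explicit_key is not None:
        return explicit_key
    return os.environ.get("LLAMA_STACK_CLIENT_API_KEY")

os.environ["LLAMA_STACK_CLIENT_API_KEY"] = "My API Key"
print(resolve_api_key())  # -> My API Key
```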
@@ -309,10 +309,10 @@ Note that requests that time out are [retried twice by default](#retries).
 
 We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
 
-You can enable logging by setting the environment variable `LLAMA_STACK_LOG` to `info`.
+You can enable logging by setting the environment variable `LLAMA_STACK_CLIENT_LOG` to `info`.
 
 ```shell
-$ export LLAMA_STACK_LOG=info
+$ export LLAMA_STACK_CLIENT_LOG=info
 ```
 
 Or to `debug` for more verbose logging.
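To illustrate what this hunk's env var controls: `LLAMA_STACK_CLIENT_LOG` selects a stdlib `logging` level. The mapping below is a hypothetical sketch of that wiring, not the client's actual internals:

```python
import logging
import os

# Hypothetical mapping from the LLAMA_STACK_CLIENT_LOG value to stdlib
# logging levels; the real client's configuration code may differ.
_LEVELS = {"info": logging.INFO, "debug": logging.DEBUG}

def level_from_env():
    name = os.environ.get("LLAMA_STACK_CLIENT_LOG", "").lower()
    return _LEVELS.get(name)

os.environ["LLAMA_STACK_CLIENT_LOG"] = "debug"
logging.basicConfig(level=level_from_env())
print(level_from_env() == logging.DEBUG)  # -> True
```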
@@ -425,7 +425,7 @@ import httpx
 from llama_stack_client import LlamaStackClient, DefaultHttpxClient
 
 client = LlamaStackClient(
-    # Or use the `LLAMA_STACK_BASE_URL` env var
+    # Or use the `LLAMA_STACK_CLIENT_BASE_URL` env var
     base_url="http://my.test.server.example.com:8083",
     http_client=DefaultHttpxClient(
         proxy="http://my.test.proxy.example.com",
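The comment changed in this hunk implies a fallback: an explicit `base_url` argument overrides the `LLAMA_STACK_CLIENT_BASE_URL` environment variable. A stdlib-only sketch of that order (the helper and the URLs below are illustrative, not from the library):

```python
import os

# Hypothetical resolution order for the server URL, mirroring the comment
# in the diff: an explicit `base_url` argument wins over the
# LLAMA_STACK_CLIENT_BASE_URL environment variable.
def resolve_base_url(explicit_url=None):
    return explicit_url or os.environ.get("LLAMA_STACK_CLIENT_BASE_URL")

os.environ["LLAMA_STACK_CLIENT_BASE_URL"] = "http://localhost:8321"
print(resolve_base_url("http://my.test.server.example.com:8083"))
```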