# Running Examples

Run the examples in this directory with:

```sh
# Run example
python3 examples/<example>.py

# or with uv
uv run examples/<example>.py
```

See [ollama/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md) for full API documentation.

### Chat - Chat with a model

- [chat.py](chat.py)
- [async-chat.py](async-chat.py)
- [chat-stream.py](chat-stream.py) - Streamed outputs
- [chat-with-history.py](chat-with-history.py) - Chat with a model while maintaining conversation history
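
For orientation, the core call these examples build on looks like this (a minimal sketch; `llama3.2` is an assumed model name, use any model you have pulled):

```python
from ollama import chat

# Ask a single question; llama3.2 is an assumption, any chat model works.
response = chat(model='llama3.2', messages=[
  {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response.message.content)
```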

### Generate - Generate text with a model

- [generate.py](generate.py)
- [async-generate.py](async-generate.py)
- [generate-stream.py](generate-stream.py) - Streamed outputs
- [fill-in-middle.py](fill-in-middle.py) - Given a prefix and suffix, fill in the middle
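
The generate endpoint is the single-turn counterpart to chat (a minimal sketch, assuming `llama3.2` as the model):

```python
from ollama import generate

# One-shot completion from a prompt, no conversation state.
response = generate(model='llama3.2', prompt='Why is the sky blue?')
print(response.response)
```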

### Tools/Function Calling - Call a function with a model

- [tools.py](tools.py) - Simple example of Tools/Function Calling
- [async-tools.py](async-tools.py)
- [multi-tool.py](multi-tool.py) - Using multiple tools, with thinking enabled
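
In rough outline, the library can derive a tool schema from a typed Python function, and the model then requests calls to it (a sketch, assuming `llama3.1` as a tool-capable model):

```python
from ollama import chat

def add_two_numbers(a: int, b: int) -> int:
  """Add two numbers.

  Args:
    a: The first number
    b: The second number
  """
  return a + b

response = chat(
  model='llama3.1',  # assumed: any model with tool support
  messages=[{'role': 'user', 'content': 'What is 10 + 10?'}],
  tools=[add_two_numbers],
)
# The model returns tool calls for you to execute and feed back.
for call in response.message.tool_calls or []:
  print(call.function.name, call.function.arguments)
```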

#### gpt-oss

- [gpt-oss-tools.py](gpt-oss-tools.py)
- [gpt-oss-tools-stream.py](gpt-oss-tools-stream.py)
- [gpt-oss-tools-browser.py](gpt-oss-tools-browser.py) - Using browser research tools with gpt-oss
- [gpt-oss-tools-browser-stream.py](gpt-oss-tools-browser-stream.py) - Using browser research tools with gpt-oss, with streaming enabled

### Web search

An API key from Ollama's cloud service is required. You can create one [here](https://ollama.com/settings/keys).

```shell
export OLLAMA_API_KEY="your_api_key_here"
```

- [web-search.py](web-search.py)
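
With the key exported, the search itself is a short call (a sketch, assuming a recent client version that exposes `web_search`):

```python
from ollama import web_search

# Requires OLLAMA_API_KEY in the environment; uses Ollama's hosted search.
results = web_search('What is Ollama?')
print(results)
```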

#### MCP server

The MCP server can be used with an MCP client such as Cursor, Cline, Codex, Open WebUI, Goose, and others.

```sh
uv run examples/web-search-mcp.py
```

Configuration to use with an MCP client:

```json
{
  "mcpServers": {
    "web_search": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "path/to/ollama-python/examples/web-search-mcp.py"],
      "env": { "OLLAMA_API_KEY": "your_api_key_here" }
    }
  }
}
```

- [web-search-mcp.py](web-search-mcp.py)

### Multimodal with Images - Chat with a multimodal (image chat) model

- [multimodal-chat.py](multimodal-chat.py)
- [multimodal-generate.py](multimodal-generate.py)
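
Images attach directly to a chat message (a minimal sketch; `gemma3` and `image.png` are assumptions, use any vision-capable model and a real file):

```python
from ollama import chat

response = chat(
  model='gemma3',  # assumed: any multimodal model
  messages=[{
    'role': 'user',
    'content': 'What is in this image?',
    'images': ['image.png'],  # hypothetical local file path
  }],
)
print(response.message.content)
```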

### Structured Outputs - Generate structured outputs with a model

- [structured-outputs.py](structured-outputs.py)
- [async-structured-outputs.py](async-structured-outputs.py)
- [structured-outputs-image.py](structured-outputs-image.py)
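
The pattern in these examples is to constrain the output with a JSON schema, then parse the reply (a sketch, assuming `llama3.1` and Pydantic):

```python
from ollama import chat
from pydantic import BaseModel

class Pet(BaseModel):
  name: str
  animal: str
  age: int

response = chat(
  model='llama3.1',  # assumed model name
  messages=[{'role': 'user', 'content': 'I have a 2 year old cat named Luna.'}],
  format=Pet.model_json_schema(),  # constrain output to this schema
)
# The reply is JSON matching the schema, so it parses cleanly.
print(Pet.model_validate_json(response.message.content))
```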

### Ollama List - List all downloaded models and their properties

- [list.py](list.py)
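
At its core (a minimal sketch):

```python
from ollama import list as list_models

# Each entry carries the model name plus metadata such as its size.
for model in list_models().models:
  print(model.model, model.size)
```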
51 | 91 |
|
52 | 92 | ### Ollama Show - Display model properties and capabilities
|
53 |
| -- [show.py](show.py) |
54 | 93 |
|
| 94 | +- [show.py](show.py) |
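
A minimal sketch, assuming `llama3.2` is pulled locally:

```python
from ollama import show

info = show('llama3.2')
print(info.details)       # family, parameter size, quantization, ...
print(info.capabilities)  # e.g. completion, tools
```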
55 | 95 |
|
56 | 96 | ### Ollama ps - Show model status with CPU/GPU usage
|
57 |
| -- [ps.py](ps.py) |
58 | 97 |
|
| 98 | +- [ps.py](ps.py) |
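
A minimal sketch:

```python
from ollama import ps

# Lists models currently loaded into memory and their VRAM usage.
for model in ps().models:
  print(model.model, model.size_vram)
```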
59 | 99 |
|
60 | 100 | ### Ollama Pull - Pull a model from Ollama
|
| 101 | + |
61 | 102 | Requirement: `pip install tqdm`
|
62 |
| -- [pull.py](pull.py) |
63 | 103 |
|
| 104 | +- [pull.py](pull.py) |
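
The example wraps this loop in tqdm progress bars; stripped down, it looks like this (assuming `llama3.2`):

```python
from ollama import pull

# stream=True yields progress events as layers download.
for progress in pull('llama3.2', stream=True):
  print(progress.status, progress.completed, progress.total)
```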
64 | 105 |
|
65 | 106 | ### Ollama Create - Create a model from a Modelfile
|
66 |
| -- [create.py](create.py) |
67 | 107 |
|
| 108 | +- [create.py](create.py) |
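
A sketch of the shape of the call, assuming a recent client where `create` takes Modelfile-style fields as keyword arguments:

```python
from ollama import create

response = create(
  model='mario',     # name of the model to create
  from_='llama3.2',  # assumed base model
  system='You are Mario from Super Mario Bros.',
)
print(response.status)
```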
68 | 109 |
|
69 | 110 | ### Ollama Embed - Generate embeddings with a model
|
70 |
| -- [embed.py](embed.py) |
71 | 111 |
|
| 112 | +- [embed.py](embed.py) |
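
A minimal sketch, assuming `llama3.2` (any embedding-capable model works):

```python
from ollama import embed

# input accepts a single string or a list of strings.
response = embed(model='llama3.2', input='Hello, world!')
print(len(response.embeddings), len(response.embeddings[0]))
```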
72 | 113 |
|
73 | 114 | ### Thinking - Enable thinking mode for a model
|
| 115 | + |
74 | 116 | - [thinking.py](thinking.py)
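
Thinking models return their reasoning separately from the answer (a sketch, assuming `deepseek-r1` as a thinking-capable model):

```python
from ollama import chat

response = chat(
  model='deepseek-r1',  # assumed: any model that supports thinking
  messages=[{'role': 'user', 'content': 'How many Rs are in strawberry?'}],
  think=True,
)
print('Thinking:', response.message.thinking)
print('Answer:', response.message.content)
```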

### Thinking (generate) - Enable thinking mode for a model

- [thinking-generate.py](thinking-generate.py)

### Thinking (levels) - Choose the thinking level

- [thinking-levels.py](thinking-levels.py)
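
A sketch, assuming a model such as gpt-oss that accepts a level string instead of a boolean for `think`:

```python
from ollama import chat

response = chat(
  model='gpt-oss:20b',  # assumed model name
  messages=[{'role': 'user', 'content': 'What is 10 + 23?'}],
  think='low',  # assumed levels: 'low', 'medium', or 'high'
)
print(response.message.content)
```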