======================
LangChain
======================

Here we have some code snippets that help compare a vanilla code implementation
with LangChain and Hamilton.

LangChain's focus is on hiding details and making code terse. Hamilton's focus
instead is on making code more readable, maintainable, and, importantly,
customizable. So don't be surprised that Hamilton's code is "longer" - that's
by design. With Hamilton there is also little abstraction between you and the
underlying libraries, whereas LangChain abstracts them away, so you can't
easily see what's going on underneath.

*Rhetorical question*: which code would you rather maintain, change, and update?
----------------------
A simple joke example
----------------------

.. table:: Simple Invocation
   :align: left

+-----------------------------------------------------------+----------------------------------------------------------+-------------------------------------------------------------+
| Hamilton | Vanilla | LangChain |
+===========================================================+==========================================================+=============================================================+
| .. literalinclude:: langchain_snippets/hamilton_invoke.py | .. literalinclude:: langchain_snippets/vanilla_invoke.py | .. literalinclude:: langchain_snippets/lcel_invoke.py |
| | | |
+-----------------------------------------------------------+----------------------------------------------------------+-------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-invoke.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized.
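The snippets aren't reproduced inline here, but the pipeline they implement boils down to three steps: fill a prompt template, call the LLM, and parse the response. Here is a minimal sketch of that flow with the LLM call stubbed out so it runs without an API key; all function names and the response shape are illustrative, not taken from the snippet files.

```python
def prompt(topic: str) -> str:
    """Fill the prompt template with the requested topic."""
    return f"Tell me a short joke about {topic}"

def llm_call(prompt_text: str) -> dict:
    """Stand-in for a real chat-completion call; returns a response-shaped dict."""
    return {"choices": [{"message": {"content": f"A joke about: {prompt_text}"}}]}

def parse(response: dict) -> str:
    """Pull the text out of the response structure."""
    return response["choices"][0]["message"]["content"]

def joke(topic: str) -> str:
    """The whole pipeline: template -> LLM -> parse."""
    return parse(llm_call(prompt(topic)))

print(joke("ice cream"))
```

Each framework arranges these same three steps differently: Hamilton as named functions wired into a DAG, LangChain as chained runnables.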

-----------------------
A streamed joke example
-----------------------

With Hamilton we can just swap the call function to return a streamed response.

Note: you could use ``@config.when`` to include both streamed and non-streamed
versions in the same DAG.

.. table:: Streamed Version
   :align: left

+-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+
| Hamilton | Vanilla | LangChain |
+=============================================================+============================================================+===============================================================+
| .. literalinclude:: langchain_snippets/hamilton_streamed.py | .. literalinclude:: langchain_snippets/vanilla_streamed.py | .. literalinclude:: langchain_snippets/lcel_streamed.py |
| | | |
+-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-streamed.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized.
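The essence of the streamed variant is that the call function returns an iterator of chunks instead of a finished string, and the caller consumes chunks as they arrive. A toy sketch of that consumption pattern, with the streamed LLM call stubbed by a plain generator (names are illustrative):

```python
from typing import Iterator

def stream_llm_call(topic: str) -> Iterator[str]:
    """Stand-in for a streamed chat call: yields the response chunk by chunk."""
    for word in f"Here is a joke about {topic}".split():
        yield word + " "

chunks = []
for chunk in stream_llm_call("sushi"):
    chunks.append(chunk)  # in real code you'd print(chunk, end="", flush=True)
joke = "".join(chunks).strip()
print(joke)
```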

-------------------------------
A "batch" parallel joke example
-------------------------------

In this batch example, the joke requests are parallelized.

Note: with Hamilton you can delegate to many different backends for
parallelization, e.g. Ray, Dask, etc. We use multi-threading here.

.. table:: Batch Parallel Version
   :align: left

+-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+
| Hamilton | Vanilla | LangChain |
+=============================================================+============================================================+===============================================================+
| .. literalinclude:: langchain_snippets/hamilton_batch.py | .. literalinclude:: langchain_snippets/vanilla_batch.py | .. literalinclude:: langchain_snippets/lcel_batch.py |
| | | |
+-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-batch.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 75%

   The Hamilton DAG visualized.
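The multi-threading approach mentioned above works because each joke request is I/O-bound (waiting on the network), so a thread pool can overlap the calls. A minimal stdlib sketch, with the LLM call stubbed out (names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def tell_joke(topic: str) -> str:
    """Stand-in for one I/O-bound LLM call."""
    return f"A joke about {topic}"

topics = ["ice cream", "sushi", "bicycles"]
# pool.map fans the calls out across threads and preserves input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    jokes = list(pool.map(tell_joke, topics))
print(jokes)
```

Swapping in Ray or Dask changes where the work runs, not the shape of this fan-out/fan-in pattern.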

-----------------------
An "async" joke example
-----------------------

Here we show how to make the joke using async constructs. With Hamilton you can
mix and match async and regular functions; the only change is that you need to
use the async Hamilton Driver.

.. table:: Async Version
   :align: left

+-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+
| Hamilton | Vanilla | LangChain |
+=============================================================+============================================================+===============================================================+
| .. literalinclude:: langchain_snippets/hamilton_async.py | .. literalinclude:: langchain_snippets/vanilla_async.py | .. literalinclude:: langchain_snippets/lcel_async.py |
| | | |
+-------------------------------------------------------------+------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-async.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized.
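The async variants all rest on the same core idea: issue the requests concurrently on one event loop and await them together. A self-contained sketch with the LLM call stubbed out (names are illustrative):

```python
import asyncio

async def tell_joke(topic: str) -> str:
    """Stand-in for an async LLM call."""
    await asyncio.sleep(0)  # where the real awaited network call would go
    return f"A joke about {topic}"

async def main() -> list:
    # gather runs all the coroutines concurrently and returns results in order
    return await asyncio.gather(*(tell_joke(t) for t in ["cats", "dogs"]))

jokes = asyncio.run(main())
print(jokes)
```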

---------------------------------
Switch LLM to completion for joke
---------------------------------

Here we show how to make the joke switching to a different OpenAI model that is
for completion.

Note: we use the ``@config.when`` construct to augment the original DAG and add
a new function that uses the different OpenAI model.

.. table:: Completion Version
   :align: left

+------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
| Hamilton | Vanilla | LangChain |
+==================================================================+=================================================================+===============================================================+
| .. literalinclude:: langchain_snippets/hamilton_completion.py | .. literalinclude:: langchain_snippets/vanilla_completion.py | .. literalinclude:: langchain_snippets/lcel_completion.py |
| | | |
+------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-completion.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized with configuration provided for the completion
   path. Note the dangling node - that's normal; it's not used in the
   completion path.
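The idea behind ``@config.when`` is that a configuration value chooses which implementation of a node is wired into the DAG at build time. The following plain-Python sketch mimics that selection with an explicit ``if`` (function names and the ``model_type`` key are illustrative; Hamilton does this declaratively rather than with a dispatcher like this):

```python
def joke_via_chat(topic: str) -> str:
    """Stand-in for the chat-model path."""
    return f"chat: a joke about {topic}"

def joke_via_completion(topic: str) -> str:
    """Stand-in for the completion-model path."""
    return f"completion: a joke about {topic}"

def build_pipeline(config: dict):
    """Pick one implementation at build time based on configuration."""
    if config.get("model_type") == "completion":
        return joke_via_completion
    return joke_via_chat

joke = build_pipeline({"model_type": "completion"})
print(joke("tea"))
```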

-------------------------
Switch to using Anthropic
-------------------------

Here we show how to make the joke switching to a different model provider, in
this case Anthropic.

Note: we use the ``@config.when`` construct to augment the original DAG and add
new functions to use Anthropic.

.. table:: Anthropic Version
   :align: left

+------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
| Hamilton | Vanilla | LangChain |
+==================================================================+=================================================================+===============================================================+
| .. literalinclude:: langchain_snippets/hamilton_anthropic.py | .. literalinclude:: langchain_snippets/vanilla_anthropic.py | .. literalinclude:: langchain_snippets/lcel_anthropic.py |
| | | |
+------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+

.. figure:: langchain_snippets/hamilton-anthropic.png
   :alt: Structure of the Hamilton DAG
   :align: center
   :width: 50%

   The Hamilton DAG visualized with configuration provided to use Anthropic.

-------
Logging
-------

Here we show how to log more information about the joke request. Hamilton has
many customization options; one that works out of the box is logging more
information via printing.

.. table:: Logging
   :align: left

+------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
| Hamilton | Vanilla | LangChain |
+==================================================================+=================================================================+===============================================================+
| .. literalinclude:: langchain_snippets/hamilton_logging.py | .. literalinclude:: langchain_snippets/vanilla_logging.py | .. literalinclude:: langchain_snippets/lcel_logging.py |
| | | |
+------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
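The simplest form of "log more information" is bracketing the LLM call with log statements so you can see inputs going out and responses coming back. A stdlib ``logging`` sketch with the LLM call stubbed out (names are illustrative):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("joke_pipeline")

def tell_joke(topic: str) -> str:
    """Stand-in for the LLM call, with before/after logging around it."""
    logger.info("requesting joke about %r", topic)
    result = f"A joke about {topic}"
    logger.info("got %d characters back", len(result))
    return result

print(tell_joke("coffee"))
```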

---------
Fallbacks
---------

Fallbacks are pretty situation- and context-dependent. It's not that hard to
wrap a function in a try/except block. The key is to make sure you know what's
going on and that a fallback was triggered. So in our opinion it's better to be
explicit about it.

.. table:: Fallbacks
   :align: left

+------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
| Hamilton | Vanilla | LangChain |
+==================================================================+=================================================================+===============================================================+
| .. literalinclude:: langchain_snippets/hamilton_fallbacks.py | .. literalinclude:: langchain_snippets/vanilla_fallbacks.py | .. literalinclude:: langchain_snippets/lcel_fallbacks.py |
| | | |
+------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------+
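The "be explicit about it" advice above amounts to this shape: try the primary provider, surface the failure somewhere visible, then call the backup. A minimal sketch with both providers stubbed out (names are illustrative; the primary is rigged to fail so the fallback path runs):

```python
def primary_joke(topic: str) -> str:
    """Stand-in for the primary provider; rigged to fail here."""
    raise RuntimeError("primary provider unavailable")

def backup_joke(topic: str) -> str:
    """Stand-in for the backup provider."""
    return f"(backup) a joke about {topic}"

def joke_with_fallback(topic: str) -> str:
    """Explicit fallback: try the primary, record the failure, use the backup."""
    try:
        return primary_joke(topic)
    except Exception as e:
        print(f"fallback triggered: {e}")  # make the fallback visible, don't swallow it
        return backup_joke(topic)

print(joke_with_fallback("rain"))
```

Because the try/except is in your own code, you decide exactly which exceptions trigger the fallback and how the event gets recorded.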