- `Quest` - The main object. Initialize it, then call `asyncio.run(quest.run())` to run it.
- `Path` - A `Path` is one "step" in a player's experience.
- `ActionResolver` - The real decision-making part of the `Path`. The two main ones are the `SequentialActionResolver`, which takes several `Action`s and just runs them in order, and the `LlmActionResolver`, which uses an LLM to decide what to do.
  - `starts_without_player_action` - If this parameter is `True`, then a `ChangePathAction` with `next_action='path'` can immediately call this `ActionResolver` without waiting for player input.
- `AgentAction` - A single step a path takes, such as sending a message or moving to another path. Each has a `next_action` parameter that controls whether the player is prompted for input before the next action happens.
- `core` vs `impl` vs `plugins` - `core` is the main part of the library, `impl` holds implementations of the `core` interfaces, and `plugins` are more niche implementations. For example, the path-following code is in `core`, the `LlmActionResolver` is in `impl`, and the Bag utilities are in `plugins`.
- `Datastore` - An interface the quest uses to keep player state. The only implementation so far is `jsonfile`, but another could be written for a database or similar.
- `Messager` - Like `Datastore`, this is an abstraction layer that lets the quest send messages to the player. The only implementation so far is `slack`.
- `LlmActionResolver` - There's a lot here, so it gets its own bullet point. The LLM automatically talks to the player, and you can add `LlmTool`s in the `agent_actions` parameter to let it do more. The `system_prompt` parameter can be either a string or a callable taking an `LlmContext`; this lets the system prompt change based on what's going on, which is recommended (LLMs are not good at keeping information secret).
- `LlmTool` - Describes an action the LLM can take. `description` should tell the LLM when to call the tool. `params` is a list of `LlmToolParam`s, each with a `name` and a `schema` (a JSON Schema object). `available` and `action` are callables that take an `LlmContext` and return a boolean and an `AgentAction` respectively: `available` should return `True` if the tool can currently be used, and `action` should return the `AgentAction` to take when the tool is called (`action` also receives a `dict` of the params the LLM supplied). `strict` turns on structured output, which forces the LLM's output to conform to `params` exactly but disables some JSON Schema properties; see the OpenAI docs for details. If `strict` is off, don't trust the LLM to call your function correctly.
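The action-resolver flow above can be sketched as a standalone toy. To be clear, this is an illustration of the pattern, not longchain's actual API: the class names mirror the glossary, but the signatures and the `next_action` values `'path'`/`'player'` used here are assumptions.

```python
# Toy sketch of the Path / ActionResolver / AgentAction pattern.
# NOT the library's real code; names and signatures are illustrative only.
from dataclasses import dataclass
from typing import Callable


@dataclass
class AgentAction:
    """One step a path takes, e.g. sending a message."""
    run: Callable[[], None]
    # Controls whether the player is prompted before the next action:
    # "player" = stop and wait for input, "path" = continue immediately.
    next_action: str = "player"


class SequentialActionResolver:
    """Runs several AgentActions in order, as the glossary describes."""

    def __init__(self, actions):
        self.actions = list(actions)

    def resolve(self):
        for action in self.actions:
            action.run()
            if action.next_action == "player":
                break  # hand control back and wait for player input


messages = []
resolver = SequentialActionResolver([
    AgentAction(run=lambda: messages.append("Welcome, adventurer!"),
                next_action="path"),
    AgentAction(run=lambda: messages.append("What is your name?"),
                next_action="player"),
    AgentAction(run=lambda: messages.append("not reached this turn")),
])
resolver.resolve()
print(messages)  # the third action is deferred until the player responds
```

An `LlmActionResolver` would plug into the same slot, but would pick the next `AgentAction` by querying an LLM instead of walking a fixed list.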
There is an example project in the examples folder.
To build and publish the package:

```shell
.\venv\Scripts\activate
py -m build
python -m twine upload dist/*
```
To regenerate the Bag gRPC code:

- Download the updated `bag.proto`.
- Run `pip install grpcio grpcio_tools`.
- Run `cd .\src\longchain\plugins\bag\api\`.
- Run `python -m grpc_tools.protoc -I . --grpc_python_out . --python_out . --pyi_out . bag.proto`.
- Go to the generated `bag_pb2_grpc.py` and change the `bag_pb2` import to `import longchain.plugins.bag.api.bag_pb2 as bag__pb2`.
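That last manual edit can be scripted. This is a sketch only: it assumes the generated import line reads exactly `import bag_pb2 as bag__pb2`, and it demonstrates on a stand-in file (in practice the file comes from `protoc`, so you would drop the write-a-sample step).

```python
# Sketch: rewrite the generated import in bag_pb2_grpc.py.
# Assumption: the generated line is exactly "import bag_pb2 as bag__pb2".
from pathlib import Path

path = Path("bag_pb2_grpc.py")

# Stand-in content for demonstration; normally protoc generates this file.
path.write_text("import bag_pb2 as bag__pb2\n")

text = path.read_text().replace(
    "import bag_pb2 as bag__pb2",
    "import longchain.plugins.bag.api.bag_pb2 as bag__pb2",
)
path.write_text(text)
print(path.read_text())
```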
Longchain is a tool for making long chains of NPC interactions.