A framework for constructing language model agents and training on constructive tasks.
This repo models agent-environment interactions using a Partially Observable Markov Decision Process (POMDP). Inspired by POMDPs, this repo's name `ldp` stands for Language Decision Processes.
To install `ldp`:
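(Assuming the package is published on PyPI under the name `ldp`.)

```bash
pip install ldp
```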
If you plan to export Graphviz visualizations, make sure you also install the `graphviz` library on your OS:

- Linux: `apt install graphviz`
- macOS: `brew install graphviz`
An agent is something that interacts with an environment (defined in our other GitHub repo, Future-House/aviary). An agent uses tools in response to observations, which are just natural language messages. An agent has two functions:
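Here is a sketch of those two functions in use. `get_asv` is described below; the other is assumed to be `init_state`, which builds the initial agent state from the environment's tools (`agent`, `tools`, and `obs` come from your own setup, e.g. an aviary environment, and exact signatures may differ across versions):

```py
# Sketch of one agent step; `agent`, `tools`, and `obs` are assumed to exist.
async def one_step(agent, tools, obs):
    # 1) Create the initial agent state from the environment's tools.
    agent_state = await agent.init_state(tools=tools)
    # 2) Choose an action conditioned on the observations, returning the
    #    action, the next agent state, and a value estimate.
    new_action, new_agent_state, value = await agent.get_asv(agent_state, obs)
    return new_action, new_agent_state, value
```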
`get_asv(agent_state, obs)` chooses an action (`a`) conditioned on the observation messages, and returns the next agent state (`s`) and a value estimate (`v`). The first argument, `agent_state`, is a state specific to the agent. The state is kept outside of the agent so that agents are functional, enabling batching across environments. You can set the state to `None` if you aren't using it. It could contain things like memory, e.g. a list of previous observations and actions.
The `obs` are not the complete list of all prior observations, but rather the return of `env.step`. Usually the state should keep track of these.
Value is the agent's state-action value estimate; it can default to 0. This is used for training with reinforcement learning.
You can just emit actions directly if you want:
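For example, a minimal sketch (assuming aviary's `Message`, `Tool`, `ToolCall`, and `ToolRequestMessage` types; the tool name and argument are made up, and the framework may expect the action wrapped in an `OpResult`):

```py
from aviary.core import Message, Tool, ToolCall, ToolRequestMessage

from ldp.agent import Agent


class HardcodedAgent(Agent):
    """Sketch: always request the same (made-up) tool call, with no LLM involved."""

    async def init_state(self, tools: list[Tool]) -> list[Tool]:
        # Keep the tools around in the state so get_asv could reference them.
        return tools

    async def get_asv(self, agent_state: list[Tool], obs: list[Message]):
        # Construct the action by hand. "my_tool" and its argument are made up;
        # the framework may expect the action wrapped in an OpResult.
        action = ToolRequestMessage(
            tool_calls=[ToolCall.from_name("my_tool", query="hello")]
        )
        return action, agent_state, 0.0  # action, next agent state, value estimate
```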
but you likely want to do something more sophisticated. Here's how our `SimpleAgent` - which just relies on a single LLM call - works (typing omitted):
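A rough sketch of that pattern, not the verbatim implementation (it assumes an `LLMCallOp` op and a `SimpleAgentState` holding `messages` and `tools`; import paths and keyword names may differ):

```py
from ldp.agent import Agent, SimpleAgentState  # SimpleAgentState assumed importable
from ldp.graph import LLMCallOp  # import path assumed


class SimpleAgent(Agent):
    """Roughly how a single-LLM-call agent can be structured (typing omitted)."""

    def __init__(self, llm_model="gpt-4o", **kwargs):
        super().__init__(**kwargs)
        self._llm_call_op = LLMCallOp()  # one differentiable LLM-call op
        self._llm_model = llm_model

    async def init_state(self, tools):
        return SimpleAgentState(tools=tools)  # assumed to hold messages + tools

    async def get_asv(self, agent_state, obs):
        # Bookkeeping: append the new observation messages to the history.
        next_state = SimpleAgentState(
            tools=agent_state.tools, messages=agent_state.messages + obs
        )
        # A single LLM call conditioned on the message history and the tools.
        # The config/msgs/tools keyword names are assumptions about LLMCallOp.
        result = await self._llm_call_op(
            config={"model": self._llm_model},
            msgs=next_state.messages,
            tools=next_state.tools,
        )
        # Record the chosen action so future steps see it.
        next_state.messages = [*next_state.messages, result.value]
        return result, next_state, 0.0
```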
Notice how it's pretty simple. We have to do some bookkeeping - namely appending messages as they come and passing tools. There is no magic here.
We do have a compute graph - which helps if you want to differentiate with respect to parameters inside your agent (including possibly the LLM). If your compute graph looks like the above example - where all you do is call an LLM directly - then don't worry about this.
If you want to build more complex agents and train them, then read on. Let's start with an example compute graph:
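Here is a minimal sketch (assuming `FxnOp` and the `compute_graph` context manager are importable from `ldp.graph`):

```py
import asyncio

from ldp.graph import FxnOp, compute_graph  # import path assumed

op_a = FxnOp(lambda x: 2 * x)  # wrap a plain function as an op


async def main():
    # Ops are called inside a compute graph context so the call is recorded.
    async with compute_graph():
        op_result = await op_a(3)
    print(op_result.value)  # 6


asyncio.run(main())
```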
This creates a compute graph and executes it. The compute graph is silly - just doubles the input. The compute graph executions and gradients are saved in a context for later use, like training updates. For example:
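A sketch of what that enables (the `compute_grads` method name is an assumption; check the `ldp.graph` API for the exact call):

```py
# Continuing from the example above:
print(op_result.value)  # 6

# Backpropagate through the recorded graph so upstream ops receive gradients
# that can drive training updates. The method name is an assumption; see
# ldp.graph.OpResult for the exact API.
op_result.compute_grads()
```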
Now, inside the `SimpleAgent` example above, you can see some of the compute graph. Let's see a more complex example: an agent that has a memory it can draw upon.
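Below is a condensed sketch of that idea, not the verbatim implementation; the op names (`FxnOp`, `MemoryOp`, `LLMCallOp`), import paths, and call signatures are assumptions that follow the patterns above:

```py
from aviary.core import Message

from ldp.agent import Agent, SimpleAgentState
from ldp.graph import FxnOp, LLMCallOp, MemoryOp  # import path / op names assumed


class MemoryAgentSketch(Agent):
    """Sketch of an agent that retrieves memories relevant to the latest observations."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Build the retrieval query with a differentiable op so that it stays
        # part of the compute graph.
        self._query_op = FxnOp(lambda msgs: "\n".join(m.content or "" for m in msgs))
        self._memory_op = MemoryOp()  # retrieval over stored memories (assumed op)
        # Prepend the retrieved memories to the message history, again via an op.
        self._format_op = FxnOp(
            lambda memories, msgs: [
                Message(content=f"Relevant memories:\n{memories}"),
                *msgs,
            ]
        )
        self._llm_call_op = LLMCallOp()

    async def init_state(self, tools):
        return SimpleAgentState(tools=tools)

    async def get_asv(self, agent_state, obs):
        next_state = SimpleAgentState(
            tools=agent_state.tools, messages=agent_state.messages + obs
        )
        # Each step is an op call; passing op results between ops (assumed to be
        # supported) keeps the action connected to the retrieval and the query.
        query = await self._query_op(obs)
        memories = await self._memory_op(query)
        msgs = await self._format_op(memories, next_state.messages)
        result = await self._llm_call_op(
            config={"model": "gpt-4o"}, msgs=msgs, tools=next_state.tools
        )
        next_state.messages = [*next_state.messages, result.value]
        return result, next_state, 0.0
```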
You can see in this example that we use differentiable ops to ensure there is a connection in the compute graph from the LLM result (action) back to things like the memory retrieval and the query used to retrieve the memory.
Why use a compute graph? Aside from a gradient, using the compute graph enables the tracking of all inputs/outputs to the ops and serialization/deserialization of the compute graph so that you can easily save/load them. The tracking of input/outputs also makes it easier to do things like fine-tuning or reinforcement learning on the underlying LLMs.
`Agent` and the classes in `agent.ops` are generics, which means:

- `Agent` is designed to support arbitrary types
- Subclasses can exactly specify state types, making the code more readable
If you are new to Python generics (`typing.Generic`), please read about them in the Python typing documentation.
Below is how to specify an agent with a custom state type.
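For example (a sketch, assuming `Agent` takes the state type as its generic parameter):

```py
from dataclasses import dataclass, field
from datetime import datetime

from ldp.agent import Agent


@dataclass
class MyComplexState:
    vector: list[float]
    timestamp: datetime = field(default_factory=datetime.now)


class MyAgent(Agent[MyComplexState]):
    """An agent whose state handling is now type checked against MyComplexState."""
```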
See the tutorial on building and running an agent for GSM8K.