Building Effective AI Agents: The Essential Role of Descriptions and Runbooks
AI Agents are typically multi-LLM constructs that call APIs to perform tasks assigned to them via prompts.
The scope of what an AI Agent can, should, and should not do is governed at five levels:
- The LLM’s inherent guardrails and safety mechanisms
- The role and runbook of the Agent
- The knowledge files provided to the Agent
- The prompt that triggers the Agent
- The API actions accessible to the Agent
In this blog post, we’ll focus on the second level – the Agent’s role (also called its description) and its runbook.
These two fields of text should not be confused with the prompt. While the prompt commands the agent to perform “something,” the description and runbook provide consistent guidance on how the agent should complete its tasks. Unlike a prompt, which can change with each interaction, the description and runbook should stay the same. It is possible to include all the relevant details in each prompt, but this is inefficient when the same instructions are repeated over and over. As a rule of thumb: if certain preparatory lines are always needed, place them in the runbook. When critical business data is involved, well-defined descriptions and runbooks are essential to ensure that the agent performs exactly as the process owner expects – and nothing else.
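To make the separation concrete, here is a minimal sketch (in plain Python, with a hypothetical `Agent` wrapper and made-up example text) of how the role and runbook can be fixed once and reused as system context, while only the prompt varies per request:

```python
from dataclasses import dataclass


@dataclass
class Agent:
    """Hypothetical agent wrapper: the role and runbook are fixed at
    construction time, while the prompt changes with every request."""
    role: str
    runbook: str

    def build_messages(self, prompt: str) -> list[dict]:
        # The role and runbook go into the system message on every call;
        # only the user prompt differs between interactions.
        return [
            {"role": "system", "content": f"{self.role}\n\n{self.runbook}"},
            {"role": "user", "content": prompt},
        ]


# Example usage with made-up content:
agent = Agent(
    role="You are an invoice-triage agent.",
    runbook="Action: classify each incoming invoice as 'approve' or 'review'.",
)
messages = agent.build_messages("Classify invoice INV-1042.")
```

The message list would then be passed to whichever LLM API the agent is built on; the point is simply that the stable guidance lives in one place rather than being repeated in every prompt.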
The role
The Agent’s role is a short statement explaining the Agent’s purpose, and it is typically defined in the runbook. It isn’t just for user reference; it also provides context that helps the Agent function more effectively.
The runbook
The runbook is the key to a well-functioning AI agent. It contains clear instructions in natural language that describe each action the Agent should perform, the triggers for each action, and the desired outcomes. Whereas many automation solutions require flowcharts to lay down the logic of the solution, a runbook is a work-instruction-style document that goes through the actions of the process, and their dependencies, one by one. Depending on an action’s complexity, you may need to include detailed steps to ensure consistent performance. Repeatability is crucial – if the agent produces varied results for the same task, additional instructions are likely needed.
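For illustration, one section of a hypothetical runbook – here for an imagined invoice-triage agent, with made-up queue names, thresholds, and formats – might look like this:

```
Action: Approve or flag incoming invoices
Trigger: A new invoice arrives in the "Inbox" queue
Steps:
  1. Extract the supplier name, invoice number, and total amount.
  2. If the total is below 1,000 EUR and the supplier is on the
     approved-supplier list, mark the invoice "approve".
  3. Otherwise mark it "review" and state the reason in one sentence.
Output format:
  {"invoice": "<number>", "decision": "approve" | "review", "reason": "<text>"}
Example:
  Input:  invoice INV-1042, supplier "Acme Oy", total 420 EUR
  Output: {"invoice": "INV-1042", "decision": "approve",
           "reason": "Below threshold, approved supplier"}
```

Note how the section names the trigger, spells out the steps, and pins down the exact output format with an example – the ingredients that make the agent’s behavior repeatable.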
The more tasks the agent needs to handle, the longer the runbook. Each action should have its own section, specifying exactly how outputs should be provided. Including examples of the expected input and output formats is highly effective, enabling the agent to operate consistently. However, be mindful that longer instructions consume more input tokens. To optimize, consider using multiple agents, each handling specific tasks. This reduces token usage, as each agent only has to follow a subset of the instructions rather than processing an exhaustive list on every call. Chaining such agents together improves results and makes debugging and fine-tuning easier.
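The split-and-route idea can be sketched as follows – a minimal example, assuming hypothetical per-task runbooks and the same message format as before, where each request pays only the token cost of the runbook section it actually needs:

```python
# Hypothetical per-task runbooks: each specialised agent carries only
# its own (shorter) instructions instead of one exhaustive document.
RUNBOOKS = {
    "invoice": "Role: invoice-triage agent.\nAction: classify each invoice.",
    "refund": "Role: refund-handling agent.\nAction: process refund requests.",
}


def route(task_type: str, prompt: str) -> list[dict]:
    """Select the specialised agent for this task and build its messages."""
    runbook = RUNBOOKS[task_type]  # only this subset is sent to the LLM
    return [
        {"role": "system", "content": runbook},
        {"role": "user", "content": prompt},
    ]


msgs = route("invoice", "Classify invoice INV-1042.")
```

In a chained setup, the output of one such agent (say, the triage decision) would become the input prompt of the next, so each stage can be inspected and tuned in isolation.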
For more guidance on building a robust, enterprise-grade AI agent, contact our experts!