Chains vs Workflows vs Agents
- ▸Define chains, workflows, and agents as three distinct execution patterns
- ▸Identify the trade-off between control and autonomy
- ▸Know when each pattern is the right choice
Before you can choose an architecture, you need a shared vocabulary. The AI engineering world throws around "chain", "workflow", and "agent" loosely, but they describe three fundamentally different execution models. Let's nail them down.
Pattern 1 — Chains (fixed pipeline)
A chain is the simplest pattern: a sequence of steps that always runs in the same order. Think of the story pipeline from Session 1 — generate text, then generate an image, then generate speech. No branching, no decisions, no loops.
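A minimal sketch of that story pipeline as a chain. The step functions here (`draft_story`, `illustrate`, `narrate`) are hypothetical stand-ins for real model calls; the point is the shape: the output of one step feeds the next, in an order fixed by the developer.

```python
# A chain: a fixed sequence of steps, always run in the same order.
# Each function below is a placeholder for a real model call.

def draft_story(topic: str) -> str:
    return f"A short story about {topic}."

def illustrate(story: str) -> str:
    return f"[image for: {story}]"

def narrate(illustrated: str) -> str:
    return f"[audio for: {illustrated}]"

def run_chain(topic: str) -> str:
    # No branching, no decisions, no loops: just step after step.
    result = topic
    for step in (draft_story, illustrate, narrate):
        result = step(result)
    return result
```

Swapping a step means editing this list; nothing at runtime can change the path.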
Pattern 2 — Workflows (predetermined steps with gates)
A workflow adds decision points (gates) between steps. The overall set of steps is still predetermined by the developer, but which steps actually run depends on the output of previous steps. Think of an if/else branch after each LLM call — "if the classification is X, go left; if Y, go right."
Workflows give you more flexibility than chains while keeping the developer in control. You define every possible path — the LLM just decides which path to take based on the data.
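A sketch of a workflow with one gate, assuming a support-ticket routing task (the `classify` stub stands in for an LLM classification call). Every branch is written by the developer; the model's output only selects among them.

```python
# A workflow: predetermined steps with a gate. The data decides which
# branch runs, but the developer defined every possible branch.

def classify(ticket: str) -> str:
    # Stub for an LLM classification call.
    return "refund" if "money back" in ticket.lower() else "technical"

def handle_refund(ticket: str) -> str:
    return "routed to billing team"

def handle_technical(ticket: str) -> str:
    return "routed to support engineers"

HANDLERS = {"refund": handle_refund, "technical": handle_technical}

def run_workflow(ticket: str) -> str:
    label = classify(ticket)        # gate: output of one step picks the path
    return HANDLERS[label](ticket)  # the paths themselves are fixed code
```

Debugging stays tractable because you can log the label at each gate and replay the exact path taken.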
Pattern 3 — Agents (autonomous tool loops)
An agent is fundamentally different: the LLM itself decides what to do next. It observes the current state, thinks about what action to take, acts (usually by calling a tool), and then observes the result. This is the ReAct loop you learned about in Session 6.
The key difference: in chains and workflows, you write the control flow. In agents, the LLM writes the control flow at runtime. This makes agents powerful but harder to predict and debug.
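A sketch of the ReAct-style observe-think-act loop. The "thinking" step (`choose_action`) is a deterministic stub so the example runs offline; in a real agent that decision comes from a model call at runtime, which is exactly what makes the control flow hard to predict.

```python
# An agent loop: the (stubbed) LLM picks the next action at runtime,
# the loop executes it as a tool call, and the result feeds back in.

def search(query: str) -> str:
    return f"results for '{query}'"

def calculator(expr: str) -> str:
    return str(eval(expr))  # toy tool; never eval untrusted input in production

TOOLS = {"search": search, "calculator": calculator}

def choose_action(goal: str, observations: list[str]) -> tuple[str, str]:
    # Stub for the LLM's reasoning step: decide the next tool call,
    # or finish once an observation is available.
    if not observations:
        return ("calculator", "6 * 7") if "multiply" in goal else ("search", goal)
    return ("finish", observations[-1])

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):  # cap iterations: agents can loop forever
        action, arg = choose_action(goal, observations)
        if action == "finish":
            return arg
        observations.append(TOOLS[action](arg))  # act, then observe
    return "gave up"
```

Note what is missing compared to the chain and workflow: there is no fixed path anywhere in `run_agent`. The step cap and the tool registry are the only guardrails the developer keeps.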
Side-by-side comparison
| | Chains | Workflows | Agents |
|---|---|---|---|
| Control | Developer controls all steps | Developer defines all paths | LLM decides next action |
| Flexibility | None — always same path | Medium — branches on data | High — open-ended |
| Complexity | Very low | Moderate | High |
| Debugging | Easy — linear trace | Moderate — trace each gate | Hard — non-deterministic |
| Use case | ETL, simple pipelines, batch jobs | Classification routing, multi-step with validation | Open-ended tasks, complex tool use, research |