The 5 Workflow Patterns
- Name and describe each of the 5 workflow patterns
- Know which pattern to use for a given scenario
- Understand the complexity and parallelism trade-offs of each
Workflows sit in the sweet spot between chains and agents: the developer keeps control of the overall structure, but individual steps can involve LLM calls, branching, and even parallel execution. The LangChain documentation identifies five canonical patterns. Let's walk through each.
1. Prompt Chaining
The simplest workflow. Each step takes the output of the previous step as input. You add validation gates between steps so you can catch problems early instead of letting errors cascade. Think of it as a chain with quality checks.
Real use case: Generate a marketing email, then validate it against brand guidelines, then translate it into three languages.
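The shape of this use case can be sketched in plain Python. This is a minimal illustration, not a LangChain API: `fake_llm`, `passes_brand_check`, and the other helpers are hypothetical stand-ins for real model calls and a real brand-guideline check.

```python
# Prompt-chaining sketch: each step feeds the next, with a validation
# gate between generation and translation so failures surface early.

def fake_llm(prompt: str) -> str:
    # Stand-in: a real implementation would call a model here.
    return f"[response to: {prompt}]"

def generate_email(topic: str) -> str:
    return fake_llm(f"Write a marketing email about {topic}")

def passes_brand_check(email: str) -> bool:
    # Toy gate: a real check would validate against brand guidelines.
    return "response" in email

def translate(email: str, language: str) -> str:
    return fake_llm(f"Translate into {language}: {email}")

def run_chain(topic: str, languages: list[str]) -> list[str]:
    draft = generate_email(topic)
    if not passes_brand_check(draft):
        # The gate stops the chain before errors cascade downstream.
        raise ValueError("draft failed brand-guideline gate")
    return [translate(draft, lang) for lang in languages]
```

The key property is that the gate sits *between* steps: a bad draft never reaches the translation step.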
2. Parallelization
When parts of a task are independent, run them at the same time and aggregate the results. This reduces latency dramatically — if three LLM calls each take 2 seconds, running them in parallel takes 2 seconds total instead of 6.
Real use case: Analyze a document for sentiment, extract key entities, and generate a summary — all simultaneously, then combine into a report.
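A minimal sketch of this fan-out/fan-in shape, using a thread pool since the three calls are I/O-bound; `fake_llm` is a hypothetical stand-in for a real model call.

```python
# Parallelization sketch: three independent analyses run concurrently,
# then an aggregation step combines them into one report.
from concurrent.futures import ThreadPoolExecutor

def fake_llm(prompt: str) -> str:
    # Stand-in: a real implementation would call a model here.
    return f"[response to: {prompt}]"

def analyze_document(doc: str) -> dict[str, str]:
    tasks = {
        "sentiment": f"Classify the sentiment of: {doc}",
        "entities": f"Extract key entities from: {doc}",
        "summary": f"Summarize: {doc}",
    }
    with ThreadPoolExecutor() as pool:
        # Fan out: submit all three calls at once.
        futures = {name: pool.submit(fake_llm, prompt)
                   for name, prompt in tasks.items()}
        # Fan in: collect and combine the results.
        return {name: fut.result() for name, fut in futures.items()}
```

Total latency is roughly the slowest single call, not the sum of all three.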
3. Routing
A classifier examines the input and routes it to the appropriate specialized handler. Each handler can be optimized for its specific task with a tailored prompt, different model, or dedicated tools.
Real use case: A support ticket arrives. A classifier routes it to "billing", "technical", or "general inquiry" — each with its own prompt and knowledge base.
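A sketch of the routing shape, assuming a keyword classifier as a cheap stand-in for what would normally be an LLM classification call; the handler names are hypothetical.

```python
# Routing sketch: a classifier picks a category, and a dispatch table
# maps each category to its own specialized handler.

def classify(ticket: str) -> str:
    # Stand-in classifier: a real one would be an LLM call.
    text = ticket.lower()
    if "invoice" in text or "charge" in text:
        return "billing"
    if "error" in text or "crash" in text:
        return "technical"
    return "general inquiry"

def handle_billing(ticket: str) -> str:
    return f"[billing handler, billing KB] {ticket}"

def handle_technical(ticket: str) -> str:
    return f"[technical handler, technical KB] {ticket}"

def handle_general(ticket: str) -> str:
    return f"[general handler] {ticket}"

HANDLERS = {
    "billing": handle_billing,
    "technical": handle_technical,
    "general inquiry": handle_general,
}

def route(ticket: str) -> str:
    return HANDLERS[classify(ticket)](ticket)
```

Because each handler is a separate function, each can carry its own prompt, model choice, or knowledge base without complicating the others.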
4. Orchestrator-Worker
A central LLM (the orchestrator) breaks a complex task into subtasks, dispatches them to specialized workers, and then synthesizes the results. Unlike parallelization, the subtasks are determined dynamically by the orchestrator rather than hardcoded.
Real use case: "Refactor this codebase" — the orchestrator identifies which files need changes, sends each to a worker, then verifies the combined result compiles.
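A toy sketch of the orchestrator-worker shape. The planning rule here (pick the `.py` files) is a deliberately trivial stand-in for the orchestrator LLM deciding subtasks at run time, and `fake_llm` is a hypothetical model call.

```python
# Orchestrator-worker sketch: the orchestrator plans subtasks
# dynamically, dispatches each to a worker, then synthesizes.

def fake_llm(prompt: str) -> str:
    # Stand-in: a real implementation would call a model here.
    return f"[response to: {prompt}]"

def orchestrate(task: str, files: list[str]) -> str:
    # Dynamic planning: the orchestrator decides which files need work
    # (a real orchestrator would be an LLM call, not a filter).
    plan = [f for f in files if f.endswith(".py")]
    # Dispatch: one worker call per subtask.
    worker_results = [fake_llm(f"{task} in {f}") for f in plan]
    # Synthesis: the orchestrator combines worker output into one result.
    return fake_llm("Combine and verify: " + "; ".join(worker_results))
```

The contrast with plain parallelization is the planning step: the subtask list is computed at run time, not hardcoded.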
5. Evaluator-Optimizer
One LLM generates output, and another evaluates it. If the evaluation fails, the generator tries again with the feedback. This loop continues until the quality bar is met or a retry limit is hit. It's like having a writer and an editor working together.
Real use case: Generate code, then run unit tests against it. If tests fail, feed the errors back and regenerate. Repeat until all tests pass.
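The generate-evaluate loop can be sketched as follows. Both `generate` and `evaluate` are hypothetical stand-ins (a real generator would be an LLM call, a real evaluator would run the test suite); the toy evaluator passes once feedback has been incorporated, so the loop terminates on the second attempt.

```python
# Evaluator-optimizer sketch: generate, evaluate, feed the feedback
# back, and retry until the evaluator passes or a retry cap is hit.

def generate(prompt: str, feedback: str = "") -> str:
    # Stand-in generator: appends feedback so each retry differs.
    return f"code for {prompt} {feedback}".strip()

def evaluate(candidate: str) -> tuple[bool, str]:
    # Stand-in evaluator: "passes" once the feedback was incorporated.
    ok = "fix" in candidate
    return ok, "" if ok else "please fix the empty-input edge case"

def optimize(prompt: str, max_retries: int = 3) -> str:
    feedback = ""
    for _ in range(max_retries):
        candidate = generate(prompt, feedback)
        ok, feedback = evaluate(candidate)
        if ok:
            return candidate
    raise RuntimeError("retry limit reached without passing evaluation")
```

The retry cap matters: without it, a generator that never satisfies the evaluator would loop forever.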
Comparison matrix
Here is how the five patterns compare across key dimensions:
| Pattern | Complexity | Parallelism | Dynamic control | When to use |
|---|---|---|---|---|
| Chaining | Low | No | No | Sequential steps with validation |
| Parallel | Low | Yes | No | Independent subtasks |
| Routing | Medium | No | Yes | Input-dependent specialization |
| Orchestrator | High | Yes | Yes | Complex tasks needing decomposition |
| Evaluator | Medium | No | Yes | Output quality must be verified |