Session 6 · 04 · 10 min

Batch & batch_as_completed

What you'll learn
  • Send multiple prompts in one call with model.batch()
  • Use batch_as_completed() to process results as they finish
  • Understand when batching is faster than sequential invoke()

batch() is "invoke() but with a list". LangChain fires the calls concurrently, so you get the throughput of parallel execution without writing any asyncio. Huge speedup when you have many independent prompts to run.
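Under the hood this is conceptually similar to fanning the calls out over a thread pool. A minimal stdlib sketch of that idea (no LangChain here; the hypothetical `fake_model` stands in for a real model call):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_model(prompt: str) -> str:
    # Stand-in for a network call; each "request" takes ~0.1 s.
    time.sleep(0.1)
    return prompt.upper()

prompts = ["Name a color.", "Name a fruit.", "Name a country."]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # map() runs the calls concurrently but yields results in input
    # order — the same contract batch() gives you.
    results = list(pool.map(fake_model, prompts))
elapsed = time.perf_counter() - start

print(results)
print(f"~{elapsed:.2f}s for 3 calls")  # close to 0.1 s, not 0.3 s
```

Three sequential calls would take ~0.3 s; concurrent execution finishes in roughly the time of the slowest single call.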

scripts/04_batch_and_batch_as_completed.py
from langchain_openai import ChatOpenAI  # or whichever chat model the series set up earlier

model = ChatOpenAI(model="gpt-4o-mini")

prompts = ["Name a color.", "Name a fruit.", "Name a country."]

# All at once — order-preserved
results = model.batch(prompts)                              ①
for r in results:
    print(r.content)

# As they finish — great for progress bars
for idx, result in model.batch_as_completed(prompts):       ②
    print(f"#{idx}: {result.content}")
batch() returns results in the SAME ORDER as the inputs, even if later prompts finish first.
batch_as_completed() yields an (index, result) tuple as each call finishes — use it when you want to act on results as soon as possible.
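The completion-order behaviour can be sketched the same way with the stdlib's `as_completed` (again an analogy, not LangChain's actual implementation):

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fake_model(prompt: str) -> str:
    # Stand-in for a real model call with variable latency.
    time.sleep(random.uniform(0.01, 0.1))
    return prompt.upper()

prompts = ["Name a color.", "Name a fruit.", "Name a country."]
seen = []

with ThreadPoolExecutor() as pool:
    # Map each future back to its input index, mirroring the
    # (index, result) tuples that batch_as_completed() yields.
    futures = {pool.submit(fake_model, p): i for i, p in enumerate(prompts)}
    for fut in as_completed(futures):
        idx = futures[fut]
        seen.append((idx, fut.result()))
        print(f"#{idx}: {fut.result()}")  # arrival order varies run to run
```

Every result still arrives exactly once with its original index; only the order you see them in depends on which call finishes first.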
$ python scripts/04_batch_and_batch_as_completed.py
batch() — ordered
  • Blocks until ALL finish
  • Returns a list in input order
  • Use when you need all results before doing anything

batch_as_completed() — streaming
  • Yields (idx, result) as each finishes
  • Lets you process early results while others run
  • Use for progress bars, early display, or partial results
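One more knob worth knowing about when batching against a rate-limited API: LangChain's batch methods accept a config dict with a max_concurrency key to cap how many calls are in flight (check your version's docs for details). The stdlib analogue is max_workers, sketched here with the same hypothetical fake_model:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_model(prompt: str) -> str:
    time.sleep(0.1)  # stand-in for a rate-limited API call
    return prompt.upper()

prompts = [f"Prompt {i}" for i in range(6)]

start = time.perf_counter()
# Only 2 calls in flight at a time, so 6 calls take ~3 rounds (~0.3 s)
# instead of all 6 hitting the API at once.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(fake_model, prompts))
elapsed = time.perf_counter() - start

print(len(results), f"~{elapsed:.2f}s")
```

Throttling trades a little speed for staying under the provider's rate limits — usually the right trade once the prompt list gets long.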
Knowledge Check
When would you prefer batch_as_completed() over batch()?
Recap — what you just learned
  • batch() takes a list, returns a list in order
  • batch_as_completed() yields (index, result) as each finishes
  • Both give you concurrent execution without async code
Next up: 05 — Tool Calling