Session 6 · Reference
Troubleshooting Guide
What you'll learn
- Identify LangChain-specific import and provider errors
- Fix tool-calling and structured-output bugs
- Know how to enable verbose mode and read raw messages
LangChain has more moving parts than the raw SDKs, so its error messages can be harder to decode at first. Every issue below is one that Session 6 students have actually hit; each entry explains the cause and the fix.
Installation & import errors
Provider / key errors
Tool-calling bugs
Structured output issues
Streaming and batch surprises
"stream() prints nothing"
Make sure your print call uses end="" and flush=True. Without flush=True, Python buffers output until a newline arrives, which can make streaming look broken even though tokens are flowing.
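A minimal sketch of the pattern. Here fake_stream() and the Chunk class are stand-ins for a LangChain chat model's .stream() generator, which yields chunks with a .content attribute; only the print call is the point.

```python
import io

class Chunk:
    """Stand-in for a LangChain message chunk with a .content attribute."""
    def __init__(self, content):
        self.content = content

def fake_stream():
    # Stand-in for model.stream(prompt)
    for token in ("Why ", "did ", "the ", "chicken..."):
        yield Chunk(token)

buf = io.StringIO()  # swap for sys.stdout in real use
for chunk in fake_stream():
    # end="" keeps tokens on one line; flush=True pushes each token
    # out immediately instead of waiting for a newline
    print(chunk.content, end="", flush=True, file=buf)
```

With a real model, drop the file= argument so tokens print to the terminal as they arrive.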
"batch() is slower than expected"
LangChain parallelises up to max_concurrency requests at a time (default 5). When you have many prompts, raise it: model.batch(inputs, config={"max_concurrency": 20}).
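A simplified sketch of what batch() is doing under the hood: a thread pool capped at max_concurrency. fake_invoke and the local batch() function here are stand-ins, not LangChain's API; a real model's .invoke() returns an AIMessage, not a string.

```python
from concurrent.futures import ThreadPoolExecutor

def fake_invoke(prompt: str) -> str:
    # Stand-in for model.invoke(prompt)
    return prompt.upper()

def batch(inputs, max_concurrency=5):
    # A pool of max_concurrency workers processes inputs in parallel,
    # preserving input order in the results
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        return list(pool.map(fake_invoke, inputs))

results = batch([f"prompt {i}" for i in range(40)], max_concurrency=20)
```

This is why a low cap feels slow: with 100 prompts and 5 workers, the pool drains in roughly 20 sequential waves.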
"Claude ignores my parallel tools"
This is provider-specific behaviour. Claude tends to make one tool call at a time even where OpenAI models would parallelise. Test with each provider you plan to use.
When nothing above helps
- Print raw message objects: print(result) — LangChain message classes have readable reprs that show everything.
- Enable verbose mode: import langchain and set langchain.debug = True at the top of your file to see every API call and response.
- Check versions: pip show langchain langchain-openai. Mismatched versions cause subtle bugs.
- Consult the LangChain docs at python.langchain.com for the latest API — the library moves fast.
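The version check can also be done from inside Python, which is handy in notebooks where pip may point at a different environment. This uses only the standard library:

```python
from importlib.metadata import version, PackageNotFoundError

# Report installed versions of the packages this session depends on
report = []
for pkg in ("langchain", "langchain-openai"):
    try:
        report.append(f"{pkg} {version(pkg)}")
    except PackageNotFoundError:
        report.append(f"{pkg} not installed")

print("\n".join(report))
```

If the versions differ from the ones your course materials were written against, pin them with pip before debugging further.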