Session 6 · 01 · 10 min
Model Initialisation & Parameters
What you'll learn
- Pass temperature, max_tokens, timeout, and max_retries to init_chat_model
- Understand which parameters are provider-agnostic
- See the unified constructor interface
init_chat_model accepts common LLM parameters directly. Temperature, max_tokens, timeout, and max_retries work the same way across every provider — LangChain maps them to the right field in each underlying SDK.
scripts/01_model_init_and_params.py
from langchain.chat_models import init_chat_model

model_name = "gpt-4o-mini"  # example model; substitute your own

model = init_chat_model(
    f"openai:{model_name}", ①
    temperature=0.2, ②
    max_tokens=120, ③
    timeout=30, ④
    max_retries=2, ⑤
)
response = model.invoke(
    "In exactly one line, explain why model parameters matter."
)
print(response.content)
①Provider prefix + model name — switch providers by changing only this string.
②temperature — same scale as Session 1: 0.0 (deterministic) through 1.2 (creative).
③max_tokens — cap on response length.
④timeout — seconds to wait for a response before giving up.
⑤max_retries — how many times LangChain should auto-retry transient errors.
$ python scripts/01_model_init_and_params.py
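Because these parameters are provider-agnostic, the same keyword arguments can be reused unchanged when you swap the provider prefix. A minimal sketch of that pattern follows; the model identifiers and the loop are illustrative placeholders, not part of the lesson script, and the actual `init_chat_model` call is left commented out so the sketch runs without API keys.

```python
# Sketch: one set of common parameters, reused across providers.
# init_chat_model maps each of these to the matching field in the
# underlying SDK, so only the "provider:model" string changes.
common_params = dict(
    temperature=0.2,
    max_tokens=120,
    timeout=30,
    max_retries=2,
)

# Illustrative model identifiers (placeholders, pick your own).
model_ids = [
    "openai:gpt-4o-mini",
    "anthropic:claude-3-5-haiku-latest",
]

for model_id in model_ids:
    # model = init_chat_model(model_id, **common_params)  # same call per provider
    provider, _, name = model_id.partition(":")
    print(f"{provider}: {name} with {sorted(common_params)}")
```

The point of the design: switching providers is a one-string change, while the tuning knobs stay exactly where they are.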
max_retries is free resilience
With max_retries=2, LangChain automatically retries transient failures such as a 500 error or a rate-limit response, with exponential backoff. That gives you three attempts in total (one initial call plus two retries) from a single config line, instead of a hand-written retry loop.
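For a sense of what that one line replaces, here is a minimal sketch of a manual retry loop with exponential backoff. Everything in it (TransientError, fake_call, invoke_with_retries) is a hypothetical stand-in, not a LangChain API; LangChain's real retry logic lives inside the client.

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable failure like an HTTP 500 or a rate limit."""

attempts = {"n": 0}

def fake_call():
    # Stand-in for model.invoke(...): fails twice, succeeds on the third try.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("HTTP 500")
    return "ok"

def invoke_with_retries(fn, max_retries=2, base_delay=0.01):
    # One initial attempt plus max_retries retries, doubling the delay each time.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except TransientError:
            if attempt == max_retries:
                raise  # retries exhausted, surface the error
            time.sleep(base_delay * (2 ** attempt))

print(invoke_with_retries(fake_call))  # prints "ok" on the third attempt
```

With max_retries=2 in the constructor, this whole loop collapses into one keyword argument.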
Knowledge Check
Which constructor argument makes LangChain retry transient errors automatically?
Recap — what you just learned
- ✓init_chat_model accepts common LLM params directly
- ✓temperature, max_tokens, timeout, max_retries work across all providers
- ✓max_retries gives you automatic retry logic for free