Session 6 · 00 · 10 min

Provider Smoke Tests

What you'll learn
  • Verify OpenAI, Anthropic, and Google keys all work
  • See the same LangChain code with three different provider prefixes
  • Know which key to troubleshoot if a test fails
Before you start
Copy your OpenAI key into Session6/.env. If you also have Anthropic and Google keys, add them too; only the OpenAI key is strictly required to proceed.
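If you want to confirm the .env file is actually being picked up, here is a minimal sketch. It assumes the key names this session uses are OPENAI_API_KEY, ANTHROPIC_API_KEY, and GOOGLE_API_KEY; the loader is a stdlib-only stand-in for python-dotenv's load_dotenv().

```python
import os


def load_env(path: str) -> None:
    """Minimal .env loader (stdlib only); python-dotenv's load_dotenv() does the same."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and comments; keep only KEY=value lines.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault: a key already exported in your shell wins over the file.
                os.environ.setdefault(key.strip(), value.strip().strip('"'))


if os.path.exists("Session6/.env"):
    load_env("Session6/.env")

# Report which provider keys are present before running the smoke tests.
for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"):
    print(key, "->", "set" if os.environ.get(key) else "missing")
```

A "missing" line tells you immediately which smoke test will fail before you run it.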

Three scripts, one pattern

Each of the three smoke tests imports init_chat_model, creates a model with a different provider prefix, and calls invoke() with a short question. The scripts are otherwise identical, and that uniformity is the whole point of LangChain.

00a_openai_smoke_test.py
from langchain.chat_models import init_chat_model    ①

model = init_chat_model("openai:gpt-4.1-mini", temperature=0)   ②
result = model.invoke("Say hi in 3 words.")                     ③
print(result.content)                                            ④
① Single import for any provider — init_chat_model is provider-agnostic.
② The "openai:" prefix tells LangChain which adapter to load behind the scenes.
③ invoke() is the synchronous single-call method. Same name for every provider.
④ Result is an AIMessage object; .content is the generated string.

Run them

$ python scripts/00a_openai_smoke_test.py
$ python scripts/00b_claude_smoke_test.py
$ python scripts/00c_gemini_smoke_test.py
Only 00a works? That is fine
You only strictly need OpenAI for the rest of this session. Re-run 00b and 00c later once you have your Anthropic and Google keys set up.

Knowledge check

Why does the same LangChain code work with three different providers?
Recap — what you just learned
  • init_chat_model("provider:model-name") is the one-line entry point for any provider
  • invoke() returns an AIMessage with .content
  • Same code works on three providers — proven by these three smoke tests
Next up: 01 — Model Init & Params