Why LangChain? (read this first)
- Understand what problems LangChain actually solves
- Compare raw-SDK code (Session 5) with LangChain code for the same task
- Know when LangChain is the right tool — and when it is not
The problem: you built it with raw SDKs, now what?
Imagine your boss walks over and says three things: "1) The OpenAI bill is high — can we try Claude for 30% of traffic? 2) We also want to try Gemini for long documents. 3) And by the way, we need streaming tokens for the UI." With raw SDKs, each of those is a rewrite — different client classes, different method names, different response shapes, different tool formats. Multiply that by three providers and your head hurts.
The LangChain answer: one interface, many providers
LangChain is an abstraction layer. You write your code once against its common interface (invoke, stream, batch, bind_tools, with_structured_output) and swap the provider with a single line. Same code, same method names, same response shape — whether the underlying model is GPT, Claude, or Gemini.
Raw SDK (Session 5):
- client = OpenAI(api_key=…)
- client.chat.completions.create(…)
- response.choices[0].message.content
- tools parameter: hand-written JSON schema dict
- response_format: json_schema dict
- Swap provider = rewrite

LangChain (Session 6):
- model = init_chat_model("openai:gpt-4.1-mini")
- model.invoke("hello")
- result.content
- @tool decorator + bind_tools([func])
- with_structured_output(PydanticModel)
- Swap provider = one string change
Side-by-side: the same feature in both eras
Compare the structured-output code from Session 5 (lesson 17) with Session 6 (lesson 08). Both return a typed Pydantic Movie object, but notice the provider-agnostic surface of the LangChain version.
Raw OpenAI SDK (Session 5):

```python
from openai import OpenAI
from pydantic import BaseModel

class Movie(BaseModel):
    title: str
    year: int

client = OpenAI()
response = client.responses.parse(
    model="gpt-4o-2024-08-06",
    input=[{"role": "user", "content": "Favourite movie?"}],
    text_format=Movie,
)
movie = response.output_parsed
print(movie.title)
```

LangChain (Session 6):

```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel

class Movie(BaseModel):
    title: str
    year: int

# Swap "openai:gpt-4.1-mini" to "anthropic:claude-sonnet-4-5"
# or "google_genai:gemini-2.0-flash" — nothing else changes.
model = init_chat_model("openai:gpt-4.1-mini", temperature=0)
movie = model.with_structured_output(Movie).invoke("Favourite movie?")
print(movie.title)
```

What LangChain actually gives you
- Unified method surface — invoke(), stream(), batch(), with_structured_output(), bind_tools() are identical across providers
- Tool definition from plain Python — the @tool decorator auto-generates schemas from type hints and docstrings (no hand-written JSON schema)
- Pydantic everywhere — structured output, tool arguments, retrieval all accept Pydantic classes directly
- Provider prefix init — init_chat_model("openai:…") / "anthropic:…" / "google_genai:…"
- Large ecosystem — prompt templates, retrievers, memory, caching, callbacks, LangSmith tracing
When to use LangChain (and when not)

Use LangChain when:
- You want to support multiple providers
- You want quick tool calling and structured output with minimal boilerplate
- You need retrievers, memory, prompt templates, or LangGraph agents
- You value the ecosystem of integrations over full SDK control

Stick with raw SDKs when:
- You are locked to a single provider anyway
- You need a day-one feature LangChain has not wrapped yet
- You want minimal dependencies and maximum debuggability
- You are shipping a high-volume production system that needs fine-grained control
The Session 6 arc in one picture
The 10 exercises that follow all use the same init_chat_model("openai:…") entry point. Everything you learned in Session 5 transfers conceptually — you are just meeting the LangChain version of the same patterns.
Knowledge check
- LangChain is an abstraction layer that gives you ONE interface across many LLM providers
- Same code runs on OpenAI, Anthropic, and Google with a one-string change
- Structured output and tool calling take roughly half the code thanks to the @tool decorator and with_structured_output
- Use raw SDKs when you are locked to one provider or need brand-new features
- Every Session 6 exercise from here on uses init_chat_model("openai:…") as the starting point