Session 02 · 10 min
ML vs LLM — what actually changed
What you'll learn
- Contrast the classical ML workflow with the LLM workflow
- Understand why LLMs skip feature engineering and task-specific training
- Know when classical ML still wins
Classical machine learning and LLMs are both "AI" — but the developer workflow could not be more different. In classical ML you spend weeks collecting data, labelling it, tuning features, and training a model for ONE task. With an LLM you write a prompt, call an API, and read the answer.
Classical ML workflow
What you do for classical ML
- Collect data: 1,000s of rows
- Label it: often by hand
- Feature engineering: craft the inputs
- Train model: hours to days
- Evaluate: accuracy, F1, …
- Deploy: one model per task
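The steps above can be sketched end to end. This is a toy, standard-library-only illustration: the dataset, the `SPAM_WORDS` feature, and the threshold "model" are all invented for the example, not a real pipeline.

```python
# Classical ML loop in miniature: collect, label, featurize, train, evaluate.

# 1. Collect + 2. Label: a tiny hand-labelled dataset (1 = spam)
data = [
    ("win a free prize now", 1),
    ("free money click here", 1),
    ("meeting moved to 3pm", 0),
    ("lunch tomorrow?", 0),
]

SPAM_WORDS = {"free", "win", "prize", "money", "click"}

# 3. Feature engineering: turn raw text into one hand-crafted number
def featurize(text):
    return sum(word in SPAM_WORDS for word in text.split())

# 4. Train: pick the threshold that maximises accuracy on the data
def train(examples):
    best_t, best_acc = 0, 0.0
    for t in range(4):
        acc = sum((featurize(x) > t) == bool(y) for x, y in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# 5. Evaluate on the (toy) data
threshold = train(data)
accuracy = sum((featurize(x) > threshold) == bool(y) for x, y in data) / len(data)
```

Every step here is task-specific: change the task from spam to churn and the data, features, and model all have to be rebuilt.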
LLM workflow
What you do with an LLM
- Write prompt: plain English
- Call API: one HTTP request
- Read reply: text in, text out
- Iterate: tweak the prompt
The shift
Classical ML builds a new model per task. An LLM is one giant pre-trained model that generalises to many tasks, controlled through natural-language prompts.
Classical ML
Task-specific
- Needs labelled training data
- Built for ONE task (spam? churn? fraud?)
- Requires ML expertise
- Deterministic, explainable (usually)
- Deployable offline, cheap at inference
- Months to build from scratch
LLM
General-purpose
- Needs ZERO training data (uses pre-trained weights)
- One model handles many tasks via prompts
- Accessible to any developer
- Non-deterministic, harder to explain
- API call costs money per request
- Hours to build a working prototype
Side-by-side capability matrix
| Capability | Classical ML | LLM |
|---|---|---|
| Needs labelled training data | ✓ | ✗ |
| Works without domain experts | ✗ | ✓ |
| Handles free-form text input | ✗ | ✓ |
| Deterministic output | ✓ | ✗ |
| Good for tabular numbers | ✓ | ✗ |
| Good for unstructured text / vision | ✗ | ✓ |
| Explains its decisions | ✓ | ✗ |
| Runs offline, cheap at inference | ✓ | ✗ |
| Handles multi-step reasoning | ✗ | ✓ |
When classical ML still wins
Need to score 50 million rows per day? Classical ML is orders of magnitude cheaper at that volume. Got a purely tabular dataset (sales forecast, credit risk)? A gradient-boosted tree typically beats an LLM on both accuracy and cost. LLMs shine on language, code, and unstructured data — not everywhere.
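A quick back-of-the-envelope calculation shows why volume matters. Both per-row prices below are illustrative assumptions, not real provider rates; plug in your own numbers.

```python
# Cost comparison at high volume (all prices are made-up assumptions).

ROWS_PER_DAY = 50_000_000

# Assumption: an LLM call costs ~$0.0005 per row (tokens in + out)
llm_cost = ROWS_PER_DAY * 0.0005           # dollars per day

# Assumption: a self-hosted classical model costs ~$0.0000005 per row
classical_cost = ROWS_PER_DAY * 0.0000005  # dollars per day

ratio = llm_cost / classical_cost          # how many times cheaper
```

With these (assumed) prices the classical model is roughly 1,000× cheaper per day, which is why "orders of magnitude" is not an exaggeration at this scale.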
Key takeaway
LLMs are not a replacement for ML; they are a new tool that dramatically shortens the path from idea to working prototype for language-heavy tasks.