Session 1 · H10 · 25 min

Mini CLI App — everything combined

What you'll learn
  • Dispatch multiple commands from a single CLI entrypoint
  • Reuse the text, vision, image, and TTS calls from earlier exercises
  • Ship a tiny but useful tool you can actually run on your own notes

What you will build

One script, four subcommands — summarize, ask-image, make-poster, speak. Each subcommand wraps a function you already wrote in H1, H6, H8, and H9. This is the first script in the course that feels like a real app.

Subcommand routing

  python 10_…
      │
   argparse · which subcommand?
      ├─ summarize   → text gen
      ├─ ask-image   → vision
      ├─ make-poster → image gen
      └─ speak       → TTS

New concept — argparse subparsers

Subparsers
argparse lets you define multiple subcommands, each with its own arguments: "python app.py summarize --file x.txt" routes to the summarize handler, while "python app.py speak --text Hi" routes to the speak handler.
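A minimal sketch of the subparser pattern. The two stub handlers (summarize, speak) and the prog name are placeholders standing in for the real OpenAI-backed functions from earlier exercises:

```python
import argparse

# Stub handlers standing in for the real H1/H9 functions.
def summarize(args):
    print(f"summarizing {args.file}")

def speak(args):
    print(f"speaking: {args.text}")

def build_parser():
    parser = argparse.ArgumentParser(prog="10_cli_mini_app.py")
    sub = parser.add_subparsers(dest="command", required=True)

    p_sum = sub.add_parser("summarize", help="Summarize a text file")
    p_sum.add_argument("--file", required=True)
    p_sum.set_defaults(func=summarize)   # route this subcommand to its handler

    p_speak = sub.add_parser("speak", help="Read text aloud")
    p_speak.add_argument("--text", required=True)
    p_speak.set_defaults(func=speak)

    return parser

args = build_parser().parse_args(["speak", "--text", "Hi"])
args.func(args)  # dispatches to speak → prints: speaking: Hi
```

The set_defaults(func=…) call is the standard dispatch trick: after parsing, args.func already holds whichever handler matched, so the entrypoint never needs an if/elif chain over command names.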

Run each subcommand

$ python src/10_cli_mini_app.py summarize --file notes.txt
$ python src/10_cli_mini_app.py ask-image --image assets/sample1.png --question "Describe this"
$ python src/10_cli_mini_app.py make-poster --prompt "Workshop poster for AI beginners"
$ python src/10_cli_mini_app.py speak --text "Class completed successfully"
$ python src/10_cli_mini_app.py --help

Knowledge check

Why is it useful to keep all four subcommands in ONE script instead of four separate files?

Recap — what you just learned
  • argparse subparsers let one script dispatch to many features
  • Each subcommand can have its own required flags
  • All features share the same OpenAI client and .env setup
  • This is the pattern real CLI tools (git, docker, gh) are built on
Next up: H11 — Story Pipeline (capstone)