Session 1 · H1 · 15 min
Basic Text Generation
What you'll learn
- Load an API key from .env and instantiate the OpenAI client
- Make your first responses.create() call
- Pass prompts from the command line with argparse
Before you start
You should have finished the Getting Started pages: .venv activated, OPENAI_API_KEY set in .env, and the Session1 folder open in VS Code. Your terminal prompt should show (.venv).
What you will build
A one-file Python script that reads a prompt from the command line, sends it to an OpenAI text model, and prints the reply. This is the "hello world" of every LLM workshop — once it works, everything in the rest of Session 1 is a variation on it.
How a text-generation call flows
What happens on every call:
Your script builds a prompt → the OpenAI() client sends an HTTPS request → the model generates tokens → the response comes back as JSON containing the text → print() shows it to you.
New concepts in this exercise
OpenAI client
An instance of the OpenAI class. It holds your API key and is the object you call methods on: client.responses.create(), client.images.generate(), client.audio.speech.create(). You create it once and reuse it.
responses.create()
The main method for text generation. You pass a model name and an input (string OR a list of role/content messages). It returns a Response object; response.output_text is the reply string.
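As a sketch, here are the two shapes input accepts (the client and model variables come from the setup shown later on this page, so the actual calls are left commented out):

```python
# The two shapes `input` accepts. The system/user message content here
# is illustrative, not from the lesson.
prompt_as_string = "Give me 3 startup ideas for students"

prompt_as_messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Give me 3 startup ideas for students"},
]

# response = client.responses.create(model=model, input=prompt_as_string)
# response = client.responses.create(model=model, input=prompt_as_messages)
# print(response.output_text)
```

The message-list form becomes useful later, when you want a system prompt or multi-turn context; this lesson only needs the plain string.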
argparse
Python's built-in library for reading command-line arguments like --prompt "hello". Each script in this session uses it so you can try different inputs without editing code.
The code — line by line
src/01_text_basic.py
from common import env_model, load_client, parse_text_prompt  ①

def main() -> None:
    args = parse_text_prompt()  ②
    client = load_client()  ③
    model = env_model("MODEL_TEXT", "gpt-4.1-mini")  ④
    response = client.responses.create(  ⑤
        model=model,
        input=args.prompt,  ⑥
    )
    print(response.output_text.strip())  ⑦

if __name__ == "__main__":  ⑧
    main()
①Import three helpers from common.py. This keeps each exercise short.
②parse_text_prompt() reads the --prompt flag from the command line.
③load_client() runs load_dotenv() and returns OpenAI(api_key=...).
④env_model() reads MODEL_TEXT from .env, falling back to gpt-4.1-mini.
⑤The actual API call. Everything above is setup.
⑥args.prompt is the text you typed after --prompt on the command line.
⑦.output_text is the generated string. .strip() removes trailing whitespace.
⑧Python idiom: only run main() when this file is executed directly.
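Since common.py is not shown in this lesson, here is a minimal sketch of what env_model() might look like (an assumption, not the actual helper):

```python
import os

# A minimal sketch of what common.py's env_model() might look like
# (an assumption -- the real helper is not shown in this lesson).
def env_model(var_name: str, default: str) -> str:
    """Read a model name from the environment, falling back to a default."""
    return os.getenv(var_name) or default

# load_client() would similarly call load_dotenv() and then return
# OpenAI(api_key=os.environ["OPENAI_API_KEY"]); it is omitted from this
# sketch because it needs the openai and python-dotenv packages.
```

The "or default" also covers the case where MODEL_TEXT is set but empty, which a plain os.getenv(var_name, default) would not.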
Run it
$ python src/01_text_basic.py --prompt "Give me 3 startup ideas for students"
The exact wording will differ on every run — that is normal. LLM output is sampled, so it is non-deterministic by default; setting temperature=0 makes it far more repeatable, though not guaranteed to be identical.
How it works (plain English)
When you hit Enter, your script imports the OpenAI library, reads your key from .env, and wraps your --prompt into a JSON body that is posted to api.openai.com/v1/responses. OpenAI runs the chosen model on your prompt, streams the generated tokens back, and the SDK assembles them into response.output_text. That string is all your script prints.
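To make that concrete, here is a simplified sketch of the JSON body the SDK posts to api.openai.com/v1/responses (the real request also carries an Authorization header with your key and supports many more optional fields):

```python
import json

# A simplified sketch of the request body posted to /v1/responses.
# The real request adds auth headers and more optional fields.
body = {
    "model": "gpt-4.1-mini",
    "input": "Give me 3 startup ideas for students",
}
print(json.dumps(body, indent=2))
```

Everything the script configures — model name and prompt — ends up as a field in this body; the SDK handles the HTTP details for you.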
Try it yourself
- Change the prompt to ask for a haiku about your favourite city.
- Replace --prompt with a hardcoded string inside the script and confirm it still works.
- Print response.output_text WITHOUT .strip() and observe the extra whitespace.
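The third exercise can be previewed without an API call; .strip() trims leading and trailing whitespace from a string (the reply below is a stand-in, not real model output):

```python
# What .strip() does to a reply that ends with a newline.
# The string here is a stand-in; real replies vary.
reply = "Here are three ideas.\n"
print(repr(reply))          # 'Here are three ideas.\n'
print(repr(reply.strip()))  # 'Here are three ideas.'
```

Models often end their output with a newline, which is why the script strips before printing.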
Knowledge check
Knowledge Check
What does client.responses.create() return?
Code Check
What happens if you delete the line "client = load_client()" from the script?
Recap — what you just learned
- ✓ The minimal OpenAI call has 3 parts: load .env → create client → call responses.create()
- ✓ response.output_text is where the reply string lives
- ✓ argparse lets you pass --prompt from the command line without editing code
- ✓ common.py keeps boilerplate (key loading, arg parsing) out of each exercise