Generative Production

The study guide, at a glance.

Foundations

The meta-skills every section refines.

What is Generative Production?

A new kind of collaborator — not a new camera.

Prompting Principles

Specificity beats cleverness.

Constrain to create. Iterate in small moves.

Iteration and Selection

The model is a coverage machine.

Your job is deciding what to keep.

Tracking Your Work

Prompt book. Git history. Same discipline.

Generative Media

Images, video, audio, spatial.

Concepts

Model basics · modalities · tools and vendors

Craft

Consistency, camera motion, temporal coherence — and when not to use AI.

Lifecycle Crosswalk

Dev · Pre · Production · Post · Delivery.

A crosswalk, not a spine.

Vibe-coding for Filmmakers

Code is generative production, too.

Foundations (Vibe-coding)

Agents · Code · Terminal · Git

What is an agent?

A model that takes actions, not just produces text.

  • Autocomplete — suggests next few lines; you stay in control.
  • Chat — ask, answer, copy into your project.
  • Agent — reads and writes your project directly.

Pick the shape that matches how much trust the task deserves.

Where does code actually run?

Local, cloud, hybrid — the difference matters more than it looks.

  • Local — fast, free, private, hardware-bound.
  • Cloud — scalable, costly, less private.
  • Even “local” agents often use cloud models on your behalf.

Terminal literacy

Agents live in the terminal — know a few basic commands.

  • cd · ls · cat · git
  • Read every command before approving.
$ cd ~/my-project
$ ls
README.md   src/   package.json

$ cat README.md
# My Project

Git literacy

The undo button for AI-generated code.

  • Remembers every version you commit.
  • Commit small, commit often.
  • Short message describing why.
  • Review before pushing.

Git literacy

The everyday loop.

$ git status
modified:  src/auth.js

$ git diff src/auth.js

$ git add src/auth.js
$ git commit -m "fix: handle expired tokens"
$ git push

Platforms

Where you actually write code with a model.

Platforms

A spectrum — hosted to local.

  • In-browser — Lovable · Bolt · v0 · Replit.
  • Native desktop — Claude (Anthropic) · ChatGPT (OpenAI).
  • AI-integrated IDEs — Cursor · Windsurf · VS Code + Copilot · JetBrains AI.
  • Command-line agents — Claude Code · Codex CLI · Aider.

More hosted = less setup. More local = more control.

Typical pricing

Most platforms cluster in a few tiers.

  • Free — limited usage, slower models, rate caps.
  • ~$20/month — Claude Pro · ChatGPT Plus · Cursor Pro · GitHub Copilot · Replit Core.
  • Usage-based API — pay per token (Anthropic, OpenAI, Google).
  • Team / enterprise — typically $50–100+/seat with admin features.

Confirm which tier your real workflow needs before committing.

Setup and Safety

The things that hurt you if you skip them.

Managing credentials

Password manager → .env → .gitignore.

  • Real keys live in a password manager.
  • .env is a local cache; it is gitignored.
  • Ship .env.example with blank values.
  • Set up .gitignore before adding a credential.
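The order above is scriptable. A minimal sketch in the terminal (FAL_KEY is just an example variable name; any credential works the same way):

```shell
# Work in a scratch project directory.
project=$(mktemp -d)
cd "$project"
git init -q .

# 1. Gitignore the credential file BEFORE any key exists.
printf '.env\n' > .gitignore

# 2. The real key lives in your password manager; .env is only a local cache.
printf 'FAL_KEY=placeholder-not-a-real-key\n' > .env

# 3. Ship a template with blank values so collaborators know what to set.
printf 'FAL_KEY=\n' > .env.example

# git should see .gitignore and .env.example, but never .env.
git status --short
```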

API keys, billing, spending caps

Treat the key like a credit card.

  • One key per project (blast-radius control).
  • Hard monthly spending cap at the vendor dashboard.
  • Email alerts at 50% and 90%.

A leaked uncapped key has cost beginners thousands in a day.

Repos and .gitignore

What never goes into a public repo.

  • Credentials (.env).
  • Build artifacts (node_modules/, _site/).
  • Generated media, large binaries.
  • Anything personal or secret.

git status before every commit.
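A starting .gitignore for the list above. The exact entries depend on your stack; node_modules/ and _site/ are the examples named here, and the media patterns are illustrative:

```
# Credentials
.env

# Build artifacts
node_modules/
_site/

# Generated media and large binaries
*.mp4
*.mov
renders/
```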

Sandboxing

Trust the agent like a well-meaning intern.

  • Bounded project directory.
  • No full home or system access.
  • Review filesystem-touching commands.

Working with Models

Prompting, context, the right model for the job.

Coding models compared

Capability · context window · cost per token.

  • Claude (Opus / Sonnet / Haiku), GPT, Gemini, local open models.
  • A cheaper model that needs three attempts isn't cheaper.
  • Default: the most capable model in your budget tier.

Prompting for code

Specificity beats cleverness — especially here.

  • Name the language, framework, version.
  • Show example input → expected output.
  • Say what you already tried.

Anti-pattern: asking for a “complete implementation” with vague requirements.

Context management

The chat window is an environment, not a prompt box.

  • Everything the agent has read shapes the next response.
  • Rules files (CLAUDE.md, AGENTS.md) pin stable context.
  • Start a new chat when it drifts.
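A rules file is plain markdown the agent reads at the start of every session. A sketch of what a CLAUDE.md might contain; the project details here are invented for illustration:

```markdown
# CLAUDE.md

## Project
Static site for a short-film production diary. Plain HTML, plain Postgres.

## Conventions
- Commit messages use "fix:", "feat:", "docs:" prefixes.
- Never touch files under renders/ — generated media, not source.

## Current focus
The shot-tracking page. Ignore the legacy gallery code.
```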

Skills

Reusable instruction packages for specific tasks.

  • “Review a PR.” “Create a migration.” “Debug a failing test.”
  • One step up from rules files — follow you across projects.
  • Build one after re-explaining a workflow for the third time.

A good skill is a recipe, not a personality.

MCP

Standard protocol for agent ↔︎ external tools.

  • Filesystem · database · browser · image APIs.
  • Without MCP, every integration is one-off.
  • Wire up when copy-paste becomes the bottleneck.
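In practice, wiring up a server is usually one small config entry. A sketch in the JSON shape used by Claude Desktop's claude_desktop_config.json; the filesystem server package and the project path are examples, and the exact file location depends on your client:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/my-project"]
    }
  }
}
```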

Agents vs chat vs autocomplete

Three shapes, three right jobs.

  • Autocomplete — routine code; saves keystrokes.
  • Chat — explanations, isolated functions.
  • Agent — multi-file changes, refactors, debugging.

Mixing them is normal. Using one shape for every job is wrong.

Prompting tips

Frame questions to invite critique, not agreement.

  • Models tuned to be helpful can be sycophantic.
  • Questions with a preferred answer tend to get confirmed, not examined.

Bad: “Is this a good idea?”

Better: “What are the pros and cons of this approach?”

Building Software

Sensible defaults for your first non-trivial project.

Reading API docs

Auth · endpoints · a working example.

  • REST + HTTP is the common vocabulary.
  • GET, POST, PUT, DELETE to a URL with JSON.
  • SDKs wrap the raw calls; raw HTTP works when no SDK exists.

Read the docs before asking the model — the agent invents plausible calls that don’t exist.

Reading API docs

A curl call shows all three at once.

$ curl https://queue.fal.run/fal-ai/nano-banana-2 \
    -H "Authorization: Key $FAL_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "prompt": "a cinematic golden-hour portrait",
      "aspect_ratio": "16:9"
    }'

{
  "request_id": "019db209-11c5-7762-9c58-14e378bdbe44",
  "status": "IN_QUEUE"
}

Frameworks vs rolling-your-own

Pre-built scaffolding vs inventing it yourself.

  • Frameworks — battle-tested patterns, shared vocabulary, community.
  • Rolling your own — freedom, at the cost of re-solving already-solved problems.
  • Beginner default: the framework almost always wins.

Frameworks vs rolling-your-own

Each one has thousands of projects behind it — the agent is fluent in all of them.

  • Rails · Ruby
  • Django · Python
  • Next.js · JavaScript
  • Laravel · PHP
  • Phoenix · Elixir
  • SvelteKit · JavaScript

Native development

iOS, Android, desktop — more powerful, more costly.

  • Price of admission: Apple Developer Program $99/year, Google Play $25 one-time, plus code-signing certificates.
  • Approvals now become an issue — platform review can take days to weeks, and rejections happen.
  • Not your first project — ship on the web first.

Boring tech wins

Older stacks have millions of projects of training data.

  • Rails, Django, Next.js, plain Postgres, plain HTML.
  • The shiny framework from last quarter — thin training coverage.
  • The model improvises; it looks right, it isn’t.

Boring tech has docs, tutorials, Stack Overflow for when the model is wrong.

Dependencies and package managers

Someone else’s code your project uses.

  • npm, pip, bundler, cargo — the package manager installs and tracks.
  • Prefer well-known packages with active maintenance.
  • Pin versions in a lockfile.
  • Read what a package does before adding it.
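What pinning looks like, with npm as the example (the sharp package and version are illustrative): the manifest declares a range, and the lockfile records the exact version that was actually installed.

```json
{
  "dependencies": {
    "sharp": "^0.33.0"
  }
}
```

The lockfile itself (package-lock.json, Gemfile.lock, Cargo.lock) is generated by the package manager; commit it so a fresh install reproduces the same versions on every machine.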

Reading error messages

Read, hypothesize — then ask the model.

  • First line usually tells you what went wrong and where.
  • “I think X is happening because Y — is that right?”
  • Stack traces: find the error message, then walk the call chain to the first frame in your own code.
  • The real bug usually lives in your code, not in the library internals around it.

Common Traps

A named gallery of specific beginner mistakes.

Each trap is its own short page in the wiki.

Committing secrets to git

The single most common way beginners get burned.

  • Bots scrape public repos within minutes.
  • Set up .gitignore first, credentials after.
  • If leaked: rotate at the vendor — don’t just delete the commit.
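A cheap last-line defense is to scan staged files for key-shaped strings before committing. A minimal sketch (the sk- pattern matches one common key format; dedicated scanners like gitleaks cover far more):

```shell
# Scratch repo with a fake leaked key, to demonstrate the scan.
scan_demo=$(mktemp -d)
cd "$scan_demo"
git init -q .
printf 'api_key = "sk-fake1234567890abcdef"\n' > config.py
git add config.py

# Scan staged content for key-shaped strings before committing.
if git grep --cached -nE 'sk-[A-Za-z0-9]{16,}' -- .; then
    echo "Possible secret staged; aborting commit."
fi
```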

Blindly running agent-suggested shell commands

Most are fine. Once in a while, catastrophic.

  • Read every command before approving.
  • Flag rm -rf, sudo, unexpected redirects.
  • Never run unfamiliar commands in a directory whose contents you can't afford to lose.

Force-push disasters

One flag, whole branch gone.

  • git push --force overwrites remote with your local state.
  • Never force-push to main.
  • Prefer --force-with-lease when you must.
  • To undo a commit: git revert, not force-push.
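The safe undo in practice: git revert adds a new commit that reverses the bad one, so history moves forward instead of being rewritten. A small demonstration in a scratch repo:

```shell
# Scratch repo to demonstrate revert instead of force-push.
revert_demo=$(mktemp -d)
cd "$revert_demo"
git init -q .
git config user.email demo@example.com
git config user.name Demo

echo "good line" > notes.txt
git add notes.txt
git commit -qm "feat: add notes"

echo "bad line" > notes.txt
git commit -aqm "oops: break notes"

# Undo the bad commit by adding a commit that reverses it.
git revert --no-edit HEAD
cat notes.txt    # back to "good line"; history still intact
```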

Letting the agent refactor without review

A 300-line diff you didn’t read.

  • Treat a large refactor as three small ones.
  • One change at a time; review each.
  • Ask for a summary before approving sweeping changes.

Accepting the first thing that compiles

“Compiles” ≠ “works.”

  • Try the empty input.
  • Try the wrong type.
  • Try the oversized input.
  • Try the obvious adversarial input.
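The four probes above, applied to a small example. Suppose the agent wrote code that parses an aspect-ratio string; validate_ratio here is a stand-in written for this sketch:

```shell
# Stand-in for agent-written code: accepts "W:H" with positive integers.
validate_ratio() {
    case "$1" in
        *[!0-9:]*|"") return 1 ;;   # reject empty input and stray characters
    esac
    case "$1" in
        *:*:*) return 1 ;;          # more than one colon
        *:*)   ;;                   # exactly one colon: ok so far
        *)     return 1 ;;          # no colon at all
    esac
    w=${1%%:*}; h=${1##*:}
    [ -n "$w" ] && [ -n "$h" ] && [ "$w" -gt 0 ] && [ "$h" -gt 0 ]
}

# "Compiles" is not "works": probe the edges, not just the happy path.
validate_ratio "16:9"    && echo "16:9 ok"
validate_ratio ""        || echo "empty rejected"
validate_ratio "wide"    || echo "wrong type rejected"
validate_ratio "16:9:21" || echo "extra field rejected"
validate_ratio "0:9"     || echo "adversarial zero rejected"
```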

Scope-creep prompts

You asked for one change. You got six.

  • Constrain the agent in the prompt explicitly.
  • “Change only the authenticate function; do not modify other files.”
  • If the agent oversteps: reject the whole change and re-prompt.

Building on sand

Unreliable deps, beta APIs, deprecated models.

  • A beta API changes its response format — your pipeline breaks.
  • A model gets deprecated — your shot no longer renders.
  • Know what’s unstable. Document it. Have a fallback.

Runaway API bills

Loops, retries, leaks — always set a ceiling.

  • Per-key spending cap at the vendor.
  • Per-request timeouts in your code.
  • Circuit breaker after N consecutive failures.
  • Email alerts above a threshold.

Set these up before you need them — 3am is too late.
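The circuit-breaker idea in miniature. flaky_call is a stand-in for an API request; in a real pipeline it would be your curl call:

```shell
# Stand-in for an API call that keeps failing (e.g., a bad endpoint).
flaky_call() {
    return 1
}

max_failures=3
failures=0
attempts=0

for shot in 01 02 03 04 05; do
    attempts=$((attempts + 1))
    if flaky_call "$shot"; then
        failures=0                     # success resets the breaker
    else
        failures=$((failures + 1))
        echo "shot $shot failed ($failures/$max_failures)"
        if [ "$failures" -ge "$max_failures" ]; then
            echo "circuit open: stopping instead of burning budget"
            break
        fi
    fi
done

echo "made $attempts calls instead of 5"
```

In real code the request itself would also get a per-request limit (for example, `timeout 30 curl ...`); the hard spending cap still lives at the vendor dashboard. This loop only limits damage between alert emails.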

Deploying to production without testing

Works on my machine, at scale.

  • A staging environment that mirrors production.
  • An automated test suite that runs before every deploy.
  • A deploy that’s reversible in one command.

Trusting a one-off model demo

A clever trick is not a pipeline.

  • A demo proves a look is possible.
  • Not that a pipeline is viable at a hundred shots.
  • Budget the real cost of scaling the approach before committing.

“Works on my machine” illusions

Reproducibility is a practice, not a hope.

  • Pin every version in a lockfile.
  • Document required env vars in .env.example.
  • Check in a working setup script.
  • Test the fresh-clone scenario before declaring a project done.
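The fresh-clone scenario can itself be scripted: clone your own repo into a temp directory and run setup there. A sketch (the files checked, .env.example and setup.sh, are examples):

```shell
# Build a tiny "project" repo to stand in for yours.
origin=$(mktemp -d)
cd "$origin"
git init -q .
git config user.email demo@example.com
git config user.name Demo
printf 'FAL_KEY=\n' > .env.example
printf '#!/bin/sh\necho setup ok\n' > setup.sh
chmod +x setup.sh
git add .env.example setup.sh
git commit -qm "chore: add setup files"

# The fresh-clone test: clone to a clean directory, run setup there.
fresh=$(mktemp -d)
git clone -q "$origin" "$fresh/clone"
cd "$fresh/clone"
[ -f .env.example ] && ./setup.sh
```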

Practical

The non-creative stuff that sinks projects.