2024-01 → 2024-09
Contento — LLM Script Generator
AI-powered SaaS MVP shipped solo. End-to-end script generation pipeline with prompt orchestration, structured output, and a Vue/Nuxt frontend. Backend in FastAPI + PostgreSQL + Celery.
- Role
- Sole engineer — architecture, implementation, release
- Stack
- FastAPI · PostgreSQL · Celery · Vue/Nuxt · OpenAI
- Period
- 2024-01 → 2024-09
What it does
Content marketers feed a campaign brief and a target audience description. Contento returns a structured, ready-to-shoot script: hook, segments, calls-to-action, B-roll cues. The output is JSON, schema-validated, and directly importable into the team's video production tool.
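A minimal sketch of what that schema-validated payload could look like. The field names beyond those mentioned above (hook, segments, calls-to-action, B-roll cues) are assumptions, and the stdlib check below stands in for the real Pydantic validation:

```python
# Hypothetical script payload; the production schema lives in the backend's
# Pydantic models, so names like "calls_to_action" here are illustrative.
EXAMPLE_SCRIPT = {
    "hook": "What if your next campaign wrote itself?",
    "segments": [
        {"title": "Problem", "body": "...", "b_roll": ["office wide shot"]},
        {"title": "Solution", "body": "...", "b_roll": ["product close-up"]},
    ],
    "calls_to_action": ["Book a demo"],
}

REQUIRED_KEYS = {"hook", "segments", "calls_to_action"}

def validate_script(script: dict) -> dict:
    """Minimal structural check standing in for full schema validation."""
    missing = REQUIRED_KEYS - script.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    for seg in script["segments"]:
        if not {"title", "body", "b_roll"} <= seg.keys():
            raise ValueError("malformed segment")
    return script
```

Because the output is a plain JSON object with a fixed shape, "directly importable" means the production tool can consume it without any scraping of free-form text.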
The interesting bits of the build
Prompt orchestration, not single prompts. A pipeline of small, focused prompts — each with a tightly scoped responsibility — was an order of magnitude more reliable than a single mega-prompt asking for the whole script. Pipeline stages: brief normalization → audience modeling → hook generation → outline → expansion → review pass. Each stage's output is validated before the next runs.
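The validate-between-stages shape described above can be sketched as a list of (name, stage, validator) triples. The stub stages here are placeholders for the real LLM calls, and their names only mirror the pipeline listed above:

```python
from typing import Any, Callable

# Each stage is (name, transform, validator); a stage's output must pass
# its validator before the next stage runs.
Stage = tuple[str, Callable[[Any], Any], Callable[[Any], bool]]

def run_pipeline(stages: list[Stage], payload: Any) -> Any:
    for name, fn, is_valid in stages:
        payload = fn(payload)
        if not is_valid(payload):
            raise ValueError(f"stage {name!r} produced invalid output")
    return payload

# Stub stages standing in for prompt-backed steps — illustrative only.
stages: list[Stage] = [
    ("normalize_brief",
     lambda b: {"brief": b.strip().lower()},
     lambda o: bool(o["brief"])),
    ("generate_hook",
     lambda o: {**o, "hook": f"Hook for: {o['brief']}"},
     lambda o: o["hook"].startswith("Hook")),
]

result = run_pipeline(stages, "  Launch Campaign ")
```

Failing fast at the stage boundary is the point: a bad intermediate result never reaches the next prompt, which is where single mega-prompts quietly compound errors.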
Structured output everywhere. Pydantic models on the backend, OpenAI function calling for the generation steps. Free-form text only at the user boundary. This was the difference between "AI demo" and "tool I can rely on."
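As a sketch of the function-calling side: OpenAI's chat API accepts a JSON Schema describing the function's parameters, and the model returns its arguments as a JSON string to be parsed and checked. The schema below is hand-written for illustration (in the real backend Pydantic would generate it), and the function name `emit_script` is an assumption:

```python
import json

# Tool definition in the shape OpenAI function calling expects; the
# parameters schema is hand-written here rather than Pydantic-generated.
SCRIPT_TOOL = {
    "type": "function",
    "function": {
        "name": "emit_script",
        "description": "Return the generated script as structured JSON.",
        "parameters": {
            "type": "object",
            "properties": {
                "hook": {"type": "string"},
                "segments": {"type": "array", "items": {"type": "object"}},
                "calls_to_action": {"type": "array",
                                    "items": {"type": "string"}},
            },
            "required": ["hook", "segments", "calls_to_action"],
        },
    },
}

def parse_tool_call(arguments_json: str) -> dict:
    """Parse the model's function-call arguments and check required fields."""
    data = json.loads(arguments_json)
    required = SCRIPT_TOOL["function"]["parameters"]["required"]
    missing = [k for k in required if k not in data]
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    return data
```

Rejecting malformed arguments at this boundary, instead of letting them flow downstream, is what keeps free-form text confined to the user-facing edge.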
Background job queue from day one. Celery + Redis for the generation pipeline. The UI submits, the worker runs, the frontend polls (later upgraded to SSE). Generation latency stopped being a UX problem.
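The submit → work → poll flow can be sketched without Celery or Redis; here a thread and an in-memory dict stand in for the worker and the result backend so the pattern is runnable on its own (function names are illustrative, not the production API):

```python
import threading
import time
import uuid

# In production: Celery task + Redis result backend. Here: a thread and a
# dict, just to show the submit/poll contract the frontend relies on.
JOBS: dict[str, dict] = {}

def submit(brief: str) -> str:
    """What the UI's submit endpoint does: enqueue and return a job id."""
    job_id = uuid.uuid4().hex
    JOBS[job_id] = {"status": "pending", "result": None}
    threading.Thread(target=_worker, args=(job_id, brief)).start()
    return job_id

def _worker(job_id: str, brief: str) -> None:
    time.sleep(0.01)  # stands in for the multi-stage generation pipeline
    JOBS[job_id] = {"status": "done", "result": f"script for {brief!r}"}

def poll(job_id: str) -> dict:
    """What the frontend's polling (or SSE push) reads."""
    return JOBS[job_id]

job = submit("spring launch")
while poll(job)["status"] != "done":  # the frontend's polling loop
    time.sleep(0.005)
```

Decoupling submission from execution is what makes generation latency a background concern: the HTTP request returns immediately, and the UI only has to render job state.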
Solo engineering discipline. No team to catch design mistakes — so the architecture had to be conservative: typed APIs, contract tests, deployable infrastructure-as-code from week one. I would rather ship a smaller surface that I can extend than a wider surface I can't trust.
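One concrete form of that discipline is a contract test that pins the response shape of an endpoint, so a solo refactor cannot silently break the frontend. A minimal sketch — the endpoint fields and status values here are illustrative assumptions, not the production contract:

```python
# Hypothetical contract check for a job-status response; in practice this
# would run in the test suite against the real FastAPI app.
ALLOWED_STATUSES = {"pending", "running", "done", "failed"}

def check_job_contract(response: dict) -> None:
    assert response.keys() >= {"id", "status"}, "missing required fields"
    assert response["status"] in ALLOWED_STATUSES, "unknown status value"
    if response["status"] == "done":
        assert "script" in response, "done jobs must carry a script"

check_job_contract({"id": "abc", "status": "done", "script": {"hook": "..."}})
```

The value is asymmetric: the test is a few lines, but it is the only reviewer a solo engineer has for API-shape regressions.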
What I'd do differently now
Contento predates the maturity of agent frameworks. If I built it today the orchestration layer would be agent-based (planner + per-stage executors), not a hand-written pipeline. The architectural shape is the same; the implementation gets shorter every year.