I Automated My Entire Content Pipeline for $0/Month. Here's How.
I ship 20+ pieces of content per month across LinkedIn, Twitter, and this blog. Zero hours of manual posting. Zero dollars in platform fees.
The whole pipeline runs on free-tier APIs, SQLite, and YAML configs. Here's the stack, the architecture, and the honest limitations.
The Stack
LLM Routing: LiteLLM + Free Models
LiteLLM is a unified SDK that routes to 100+ LLM providers. I use it to build a fallback chain across free tiers:
# Tier 1: Fast, cheap tasks (routing, extraction)
Primary: Groq Llama 3.3 70B (free, stupid fast)
Fallback: Gemini Flash Lite (free, Google quota)
Fallback: DeepSeek Chat (free, solid quality)
# Tier 2: Content generation
Primary: Gemini 2.5 Flash (free, best quality/speed)
Fallback: DeepSeek V3 (free via OpenRouter)
Fallback: Groq Llama 70B (free, fast)
# Tier 3: Complex reasoning (rare)
Primary: DeepSeek Chat (free, 0.8 bench score)
Fallback: Kimi K2.5 (free via NVIDIA API)
Each tier chains three models across different providers. If Groq is rate-limited, the router auto-tries Gemini. If Gemini times out, it tries DeepSeek.
Cost: $0. I stay under free-tier quotas by routing simple tasks to cheap models and batching requests.
Limitation: Rate limits. Groq gives you ~30 requests/min on free tier. If you're doing real-time chat, you'll hit limits. For batch content generation, it's fine.
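The fallback logic itself is a dozen lines. Here's a minimal sketch of the pattern (model identifiers are illustrative; in my setup, litellm.completion is the call function that slots in):

```python
from typing import Callable

# Ordered fallback chain for one tier; model names are illustrative.
TIER_1 = [
    "groq/llama-3.3-70b-versatile",
    "gemini/gemini-flash-lite",
    "deepseek/deepseek-chat",
]

def call_with_fallbacks(models: list[str], call: Callable[[str], str]) -> str:
    """Try each model in order; return the first successful response."""
    last_error: Exception | None = None
    for model in models:
        try:
            return call(model)
        except Exception as exc:  # rate limit, timeout, provider outage
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

# With LiteLLM this would be wired up roughly as:
#   call_with_fallbacks(TIER_1, lambda m: litellm.completion(model=m, messages=msgs))
```

LiteLLM also has built-in router-level fallbacks; the hand-rolled version above just makes the control flow explicit.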
Social Publishing: Round-Robin Free APIs
Instead of paying $30/mo for Buffer or Hootsuite, I built a round-robin across 3 free APIs:
- Zernio: 20 posts/month free
- Upload-Post: 20 posts/month free
- Ayrshare: 20 posts/month free
Total: 60 posts/month for free. SQLite tracks usage per API.
# Python connector (sketch: db, limits, and current_month are defined elsewhere)
class SocialPublisher:
    def publish(self, content, platforms):
        # Get an API with available quota
        api = self.get_next_available()
        if api == "zernio":
            return self.zernio_publish(content, platforms)
        elif api == "upload_post":
            return self.upload_post_publish(content, platforms)
        elif api == "ayrshare":
            return self.ayrshare_publish(content, platforms)

    def get_next_available(self):
        # Query SQLite for this month's usage counts
        usage = db.query(
            "SELECT api, count FROM social_usage WHERE month = ?", current_month
        )
        for api, count in usage:
            if count < limits[api]:  # limits = {"zernio": 20, ...}
                return api
        return None  # All quotas exhausted
Cost: $0 until you hit 60 posts/month. Then you pay or wait for the next month.
Limitation: Each API has slightly different capabilities. Zernio supports LinkedIn carousels. Upload-Post doesn't. You need adapter logic.
State Management: SQLite (WAL Mode)
Everything mutable goes in SQLite. No JSON files, no YAML state, no "let's just keep it in the LLM context."
-- Workflow state
CREATE TABLE workflow_runs (
    run_id TEXT PRIMARY KEY,
    workflow_name TEXT,
    status TEXT, -- running|completed|failed
    current_step INTEGER,
    state_json TEXT,
    created_at INTEGER
);

-- Checkpoints (for resume)
CREATE TABLE checkpoints (
    checkpoint_id TEXT PRIMARY KEY,
    run_id TEXT,
    step INTEGER,
    state_snapshot TEXT,
    created_at INTEGER
);

-- Social publish tracking
CREATE TABLE social_usage (
    api TEXT,
    month TEXT,
    count INTEGER,
    PRIMARY KEY (api, month)
);
Why SQLite? It's a single file, zero-config, has atomic transactions, and in WAL mode readers don't block the single writer. Perfect for local-first agent systems.
Cost: $0. SQLite is public domain.
Limitation: Single machine only. If you need multi-machine, you need Postgres or distributed state. For solo operations, SQLite is perfect.
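Enabling WAL is one pragma at connection time. A minimal sketch of how I open every database:

```python
import sqlite3

def connect(path: str) -> sqlite3.Connection:
    """Open a SQLite database in WAL mode with sane defaults
    for a local-first agent system."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    conn.execute("PRAGMA synchronous=NORMAL")  # safe with WAL, faster than FULL
    return conn
```

The WAL setting is persistent (it's stored in the database file), so setting it once is enough, but re-issuing the pragma on every connect is harmless.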
Workflow Engine: LangGraph
LangGraph is a state machine framework for agent workflows. You define nodes (steps) and edges (transitions). It handles checkpoint/resume, conditional branching, and human-in-the-loop gates.
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.sqlite import SqliteSaver

# Define state shape
class ContentState(TypedDict):
    topic: str
    outline: list
    draft: str
    edited: str
    approved: bool
    published: bool

# Build graph
graph = StateGraph(ContentState)

# Add nodes (node functions like generate_outline_node are defined elsewhere)
graph.add_node("generate_outline", generate_outline_node)
graph.add_node("write_draft", write_draft_node)
graph.add_node("human_review", human_review_gate)
graph.add_node("publish", publish_node)

# Add edges
graph.set_entry_point("generate_outline")
graph.add_edge("generate_outline", "write_draft")
graph.add_edge("write_draft", "human_review")
graph.add_conditional_edges(
    "human_review",
    lambda state: "publish" if state["approved"] else "write_draft",
)
graph.add_edge("publish", END)

# Compile and run
workflow = graph.compile(checkpointer=SqliteSaver(...))
The killer feature: checkpointing. Before every step, LangGraph saves state to SQLite. If the workflow crashes (API timeout, rate limit, power outage), you resume from the last checkpoint. No lost work.
Cost: $0. LangGraph is open source.
Limitation: Learning curve. The StateGraph abstraction takes a few hours to grok. But once you get it, you'll never go back to hand-rolled state machines.
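Under the hood, "resume" is just "load the latest snapshot for this run". LangGraph's SqliteSaver does the real version; here's a hand-rolled sketch against the checkpoints table above, to show there's no magic:

```python
import json
import sqlite3

def save_checkpoint(conn: sqlite3.Connection, run_id: str, step: int, state: dict) -> None:
    """Persist the workflow state before executing the next step."""
    conn.execute(
        "INSERT INTO checkpoints (checkpoint_id, run_id, step, state_snapshot, created_at) "
        "VALUES (?, ?, ?, ?, strftime('%s','now'))",
        (f"{run_id}:{step}", run_id, step, json.dumps(state)),
    )
    conn.commit()

def latest_checkpoint(conn: sqlite3.Connection, run_id: str) -> "tuple[int, dict] | None":
    """On restart, fetch the most recent snapshot and resume from there."""
    row = conn.execute(
        "SELECT step, state_snapshot FROM checkpoints "
        "WHERE run_id = ? ORDER BY step DESC LIMIT 1",
        (run_id,),
    ).fetchone()
    return (row[0], json.loads(row[1])) if row else None
```

After a crash, the runner calls latest_checkpoint, rehydrates the state dict, and re-enters the graph at the recorded step.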
Configuration: YAML Workflows
No hardcoded workflows. Everything is YAML configs that non-developers can edit.
# workflows/content_creation.yaml
name: content_creation
description: Generate article from topic
steps:
  - name: research
    type: ai
    skill: web-researcher
    input:
      topic: "{{workflow.inputs.topic}}"
    output: research_results

  - name: outline
    type: ai
    skill: content-outliner
    input:
      topic: "{{workflow.inputs.topic}}"
      research: "{{steps.research.output}}"
    output: outline

  - name: write_draft
    type: ai
    skill: content-writer
    input:
      outline: "{{steps.outline.output}}"
    output: draft

  - name: review
    type: human_gate
    message: "Review draft before publishing?"

  - name: publish
    type: connector
    connector: social_publisher
    input:
      content: "{{steps.write_draft.output}}"
      platforms: ["linkedin", "twitter"]
Change the workflow? Edit the YAML. No code deploy. The engine loads configs at runtime.
Cost: $0. YAML is a text format.
Limitation: YAML is not a programming language. Complex logic (loops, complex conditionals) gets ugly. For those cases, write a Python node.
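Resolving the {{...}} placeholders takes a dozen lines. A sketch of how an engine might substitute values from a nested context dict (a stand-in for whatever templating the real engine uses):

```python
import re
from typing import Any

PLACEHOLDER = re.compile(r"\{\{\s*([\w.]+)\s*\}\}")

def render(template: str, context: "dict[str, Any]") -> str:
    """Replace {{dotted.path}} placeholders with values looked up
    in a nested dict, e.g. {{steps.research.output}}."""
    def lookup(match: "re.Match[str]") -> str:
        value: Any = context
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return PLACEHOLDER.sub(lookup, template)
```

The engine builds the context as steps complete, so later steps can reference any earlier output by path.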
The Architecture (ASCII Diagram)
┌─────────────────────────────────────────────────┐
│ User: "Generate content about AI agents"        │
└────────────────────────┬────────────────────────┘
                         │
                         v
                 ┌───────────────┐
                 │ CLI / Web UI  │
                 └───────┬───────┘
                         │
                         v
                 ┌───────────────┐
                 │ Workflow Eng. │  (LangGraph)
                 └───────┬───────┘
                         │
             ┌───────────┼───────────┐
             v           v           v
        ┌─────────┐ ┌─────────┐ ┌─────────┐
        │Research │ │ Outline │ │  Draft  │  (AI nodes)
        │  Node   │ │  Node   │ │  Node   │
        └────┬────┘ └────┬────┘ └────┬────┘
             │           │           │
             v           v           v
          ┌──────────────────────────────┐
          │     LLM Router (LiteLLM)     │
          └──────────────┬───────────────┘
                         │
               ┌─────────┼─────────┐
               v         v         v
           ┌──────┐  ┌──────┐  ┌──────┐
           │ Groq │  │Gemini│  │DeepSk│  (Free APIs)
           └──────┘  └──────┘  └──────┘

After draft complete:

                 ┌───────────────┐
                 │ Human Review  │  (LangGraph interrupt)
                 └───────┬───────┘
                         │ (approved)
                         v
                 ┌───────────────┐
                 │ Publish Node  │
                 └───────┬───────┘
                         │
                         v
         ┌────────────────────────────────┐
         │ Social Publisher (round-robin) │
         └───────────────┬────────────────┘
                         │
               ┌─────────┼─────────┐
               v         v         v
           ┌──────┐  ┌──────┐  ┌──────┐
           │ Zern │  │ Upld │  │ Ayrs │  (Free APIs)
           └──────┘  └──────┘  └──────┘

All state in SQLite:

        ┌──────────────────┐
        │ data/            │
        │   business.db    │  (workflow state)
        │   checkpoints.db │  (resume data)
        │   social.db      │  (quota tracking)
        └──────────────────┘
What It Can Do
- Generate content from topics: "Write about prompt engineering" → full article in 5 minutes
- Cross-post to multiple platforms: One command, posts to LinkedIn + Twitter + blog
- Resume on failure: API times out? Resume from last checkpoint, don't restart
- Human review gates: Pause before publishing, let me edit, then continue
- Quota management: Auto-switches between free APIs to maximize monthly allowance
- Cheap model routing: Simple tasks use fast models, complex tasks use better models
What It Can't Do (Honest Limitations)
1. Real-Time High Volume
Free-tier rate limits are real. Groq: ~30 req/min. Gemini: ~60 req/min. If you need to process 1000 requests in a minute, pay for a tier.
For batch content generation (my use case), it's fine. For chatbots serving 100 concurrent users, it's not.
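For batch jobs, the simplest way to stay under a per-minute cap is client-side throttling rather than reacting to 429s. A sketch of a sliding-window limiter (the clock is an argument so the logic is testable; wire in time.monotonic and time.sleep for real use):

```python
from collections import deque

class RateLimiter:
    """Allow at most `limit` calls per `window` seconds."""

    def __init__(self, limit: int, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.calls: "deque[float]" = deque()  # timestamps of recent calls

    def wait_time(self, now: float) -> float:
        """Seconds to wait before the next call is allowed (0.0 if allowed now)."""
        # Drop timestamps that have aged out of the window.
        while self.calls and self.calls[0] <= now - self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            return 0.0
        # Next slot opens when the oldest call leaves the window.
        return self.calls[0] + self.window - now

    def record(self, now: float) -> None:
        """Call after each successful request."""
        self.calls.append(now)
```

A batch runner checks wait_time before each LLM call, sleeps if needed, then records the call. With limit=30 and window=60.0 you never trip Groq's free-tier cap.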
2. Video/Image Generation at Scale
Free image APIs (Segmind FLUX) give you 20 images/month. Not enough for daily social posts. You'll need to pay or use stock images.
Video generation APIs don't have meaningful free tiers. If you need video, budget for it.
3. Multi-User / Team Workflows
This stack is single-user. SQLite is single-machine. If you need 5 people collaborating on content, you need:
- Postgres instead of SQLite
- Auth + user management
- Web app instead of CLI
Doable, but no longer $0/month. You'll need hosting ($5/mo for a VPS).
4. Perfect Quality Output
Free models are good, not perfect. Gemini 2.5 Flash scores 0.9 on benchmarks. DeepSeek Chat scores 0.8. That's 90% as good as GPT-4, not 100%.
For most content, 90% is fine. For legal docs or medical advice, pay for the best model.
Cost-Quality Tradeoffs
Here's where free models fall short vs paid:
- Nuance: Free models sometimes miss subtle context. I edit ~20% of generated content before publishing.
- Consistency: Outputs vary more than GPT-4. Same prompt, slightly different results each time.
- Edge cases: Weird inputs break free models more often. GPT-4 handles gibberish gracefully. Llama 70B sometimes hallucinates.
The mitigation: iteration prompts. First draft from free model. Review. Send back with specific feedback. Second draft is usually solid.
Total time: 5 minutes for first draft + 2 minutes for review/iteration. Still faster than writing from scratch.
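The iteration loop itself is trivial to automate around the human step. A sketch (generate and review are placeholders for the LLM call and the review step; names are mine, not from any library):

```python
from typing import Callable, Optional

def iterate_draft(
    generate: Callable[[Optional[str]], str],
    review: Callable[[str], str],
    max_rounds: int = 2,
) -> str:
    """Draft -> review -> revise loop; stops when review returns no feedback.
    generate(None) produces the first draft; generate(feedback) revises."""
    draft = generate(None)
    for _ in range(max_rounds):
        feedback = review(draft)
        if not feedback:
            break
        draft = generate(feedback)
    return draft
```

In my pipeline, review is me typing one or two specific notes; the function shape just keeps the loop resumable like every other step.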
Monthly Quotas (Real Numbers)
Here's what I actually get per month on free tiers:
LLM Calls:
- Groq: ~6,000 requests/month in practice (the 30 req/min ceiling works out to far more on paper, but daily quotas and batch scheduling cap real usage)
- Gemini: ~10000 requests/month (free quota, generous)
- DeepSeek: Unlimited free via their API (for now)
Social Posts:
- Zernio: 20/month
- Upload-Post: 20/month
- Ayrshare: 20/month
- Total: 60/month
Images:
- Segmind FLUX: 20/month (need credits after)
Newsletter:
- Resend: 3000 emails/month free
- Mailchimp: 1000 contacts free (not using, Resend is better)
For a solo creator shipping 20 posts/month, this is plenty. For an agency shipping 200/month, you'll need paid tiers.
The Code (Open Source)
I'm not gatekeeping this. The whole system is open source:
- LangGraph workflows: src/graphs/
- LLM router: src/core/llm_router.py
- Social publisher: src/connectors/social.py
- YAML configs: src/departments/marketing/workflows/
Clone it. Break it. Improve it. The whole point of building in public is sharing what works.
Would I Pay for Better Tools?
Honestly? Maybe eventually.
Right now, free tier covers my needs. I'm shipping content, hitting publish quotas, not wasting time on manual posting.
If I scale to 100+ posts/month or need real-time workflows, I'll pay for:
- GPT-4 for quality-critical content
- Better image generation (Midjourney or DALL-E)
- Dedicated social publish API (Buffer/Hootsuite if quotas run out)
But for now? Free tier is plenty. The constraint isn't API quotas, it's my time to create ideas worth publishing.
The Bottom Line
You don't need a $50/mo SaaS budget to automate content. You need:
- LiteLLM for model routing (free)
- LangGraph for workflow orchestration (free)
- SQLite for state management (free)
- Round-robin free APIs for publishing (60 posts/month)
- YAML configs for flexibility without code changes
Total cost: $0/month. Total time saved: ~10 hours/month.
Build tools that pay for themselves through automation. Then use those tools to build more automation.
"The best automation stack is the one you actually build and use. Free tier forces you to design for efficiency."
Ship it. Use it. Share it.
Want more like this?
Weekly AI automation insights, frameworks, and practical tips. No fluff.