# TweetSim API
Score and generate tweets from your code. Same engine the web app uses, accessible via REST. Authenticated by API key, free tier covers 20 scores/month.
## Quick start

Get an API key (during open beta, request one by emailing isaiahdupree33@gmail.com with the label and tier you want). Then:

```bash
curl -X POST https://tweetsim-api.onrender.com/api/v1/score \
  -H "Authorization: Bearer tsk_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"text": "Most local businesses don'"'"'t have a lead problem. They have a follow-up speed problem."}'
```
## Authentication
Pass your key in the `Authorization` header as `Bearer tsk_xxx`. Keys start with `tsk_` followed by 32 URL-safe characters.
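If you store keys in config or env vars, a client-side shape check catches obvious typos before a request burns a 401. This sketch assumes "URL-safe" means the URL-safe base64 alphabet (`A-Z`, `a-z`, `0-9`, `-`, `_`); the server remains the source of truth for validity:

```python
import re

# Assumed pattern: "tsk_" prefix + exactly 32 URL-safe base64 characters.
# Passing this check does NOT mean the key is active -- only the API knows that.
KEY_RE = re.compile(r"^tsk_[A-Za-z0-9_-]{32}$")

def looks_like_api_key(key: str) -> bool:
    """Cheap local sanity check on key shape before sending a request."""
    return KEY_RE.fullmatch(key) is not None
```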
Rate limits are per-key, per-month, reset on the 1st UTC. Hitting your limit returns 429 Too Many Requests with a clear message.
| Tier | Score / mo | Generate / mo | Price |
|---|---|---|---|
| free | 20 | 5 | $0 |
| builder | 500 | 100 | $15/mo (v1) |
| pro | unlimited | unlimited | $39/mo (v1) |
During open beta, paid tiers are minted manually. Stripe billing ships in v1 — see the roadmap.
## Endpoints
### POST /api/v1/score
Score a single tweet. Returns the same payload the web app renders: decision, pre-publish score, phoenix breakdown, composite dimensions, blockers, view-growth curve.
```http
POST https://tweetsim-api.onrender.com/api/v1/score
Authorization: Bearer tsk_xxx
Content-Type: application/json

{
  "text": "string (10-4000 chars)",
  "persona": "string (optional)",
  "want_panel": false
}
```
Response:
```json
{
  "decision": "publish" | "revise" | "reject",
  "pre_publish_score": 59.0,
  "phoenix_score": 73.9,
  "composite_score": 44.1,
  "audience_resonance": 0.34,
  "blockers": [],
  "phoenix_actions_top": [
    {"action": "reply", "probability": 0.12, "contribution": 11.49},
    ...
  ],
  "composite_dimensions": {
    "clarity": 8,
    "hook_strength": 7,
    "reply_trigger_score": 7,
    "annoy_risk": 2
  },
  "expected_engagement": null,
  "view_curve": {
    "status": "cold_start",
    "K": 659,
    "t_50_min": 29.2,
    "t_90_min": 60.6,
    "velocity_score": 0.52,
    "points": [
      {"t_min": 5, "mean": 105, "lo80": 0, "hi80": 298},
      ...
    ]
  },
  "layers_used": ["phoenix", "thread_gate", "view_curve"]
}
```
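Downstream code often only needs a go/no-go from this payload. A small helper over the fields shown above (the helper itself is ours, not part of the API):

```python
def should_publish(result: dict) -> bool:
    """Go/no-go from a /score response.

    Publish only when the engine's decision is "publish" AND no blockers fired;
    "revise" and "reject" both map to no-go.
    """
    return result["decision"] == "publish" and not result["blockers"]
```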
### POST /api/v1/generate
Generate N ranked draft variants from an idea. Each variant includes a full score response.
```http
POST https://tweetsim-api.onrender.com/api/v1/generate
Authorization: Bearer tsk_xxx
Content-Type: application/json

{
  "idea": "string (8-2000 chars)",
  "persona": "string (optional)",
  "n_variants": 5,
  "post_format": "problem_diagnosis" | "build_in_public" | "tactical_checklist" | "contrarian" | null,
  "want_panel": false
}
```
Response is `{ ok, n, ranked: [{rank, draft, sim}] }`. Variants are sorted by `pre_publish_score` in descending order; pick rank 1 to publish.
### GET /healthz
Public liveness probe. Returns engine version + ts module path. No auth required.
## Worked examples
### Score a tweet from the shell

```bash
curl -sX POST https://tweetsim-api.onrender.com/api/v1/score \
  -H "Authorization: Bearer $TSK" \
  -H "Content-Type: application/json" \
  -d '{"text":"Most local businesses don'"'"'t have a lead problem. They have a follow-up speed problem."}' \
  | jq '{decision, pre_publish_score, top_blockers: .blockers, ramp: .view_curve.velocity_score}'
```
### Score a tweet from Python

```python
import os

import requests

API = "https://tweetsim-api.onrender.com"
HEAD = {"Authorization": f"Bearer {os.environ['TSK']}"}

def score(text: str) -> dict:
    r = requests.post(f"{API}/api/v1/score", headers=HEAD,
                      json={"text": text}, timeout=15)
    r.raise_for_status()
    return r.json()

result = score("Most local businesses don't have a lead problem. "
               "They have a follow-up speed problem.")
print(result["decision"], result["pre_publish_score"])
```
### Generate + pick the best variant from Node

```javascript
const API = "https://tweetsim-api.onrender.com";
const TSK = process.env.TSK;

async function generateBest(idea) {
  const r = await fetch(`${API}/api/v1/generate`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${TSK}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ idea, n_variants: 5 }),
  });
  if (!r.ok) throw new Error(`generate failed: ${r.status}`);
  const { ranked } = await r.json();
  return ranked[0]; // rank 1 = highest pre_publish_score
}
```
## Errors
| Status | Meaning | What to do |
|---|---|---|
| 200 | Success | Use the response |
| 400 | text too short / too long / missing | Fix the payload |
| 401 | missing or invalid API key | Check Authorization header + key validity |
| 429 | monthly quota exhausted | Wait until 1st UTC or upgrade |
| 500 | engine failure (rare) | Retry with backoff; report if persistent |
| 502 | upstream LLM provider error (generate only) | Retry |
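Per the table, 429 means the monthly quota is gone (retrying won't help until the 1st UTC), while 500 and 502 are worth retrying with backoff. A transport-agnostic sketch; the `send` callable, attempt count, and delays are all illustrative choices, not API requirements:

```python
import time

RETRYABLE = {500, 502}  # transient engine / upstream errors per the table above

def call_with_backoff(send, max_attempts: int = 4, base_delay: float = 1.0):
    """Invoke send() (any callable returning an object with .status_code),
    retrying transient failures with exponential backoff.

    429 is deliberately NOT retried: the quota stays exhausted until the
    monthly reset, so backing off would just waste time.
    """
    for attempt in range(max_attempts):
        resp = send()
        if resp.status_code not in RETRYABLE:
            return resp
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return resp  # still failing after max_attempts; surface the last response
```

Wrap your actual `requests.post` (or `fetch`) call in a zero-argument lambda and pass it as `send`.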
## Honest caveats
- The view_curve runs on a cold-start prior for most accounts (status="cold_start" in the response). Real per-account curves require ~30+ tweets with multi-snapshot metrics history. The shape is sane; the absolute t_50/velocity numbers reflect a generic baseline until then.
- Scoring is deterministic on input; the only non-determinism is in the view_curve's 80% CI ribbon (Monte Carlo). The mean/K/t_50/t_90/velocity are stable.
- The engine is open source (vendored at github.com/IsaiahDupree/tweetsim-api/tree/main/vendor). You can read every line that produced a given score.
- Rate limits are not real-time. Quota counters are eventually consistent within ~1 second; bursting won't be perfectly enforced. Hard cap is per-month.