Getting Started with the Get3W API
Build your first AI generation pipeline in under 5 minutes. Step-by-step guide to the Get3W API with Python and JavaScript examples.
If you are searching for a practical Get3W API tutorial, this guide walks you from zero to a working generation flow: account setup, authentication, your first task, polling or webhooks for results, and resilient error handling. The examples mirror real request shapes used on Get3W; for every parameter and edge case, see the full reference at get3w.com/docs.
Create an account and get an API key
- Sign up at get3w.com and complete any billing or verification steps required for your workspace.
- Open the API keys page (typically Access / API Keys in the dashboard) and create a new key with a clear name (for example, prod-image-pipeline).
- Copy the key once and store it in a secret manager or environment variable—never commit keys to git.
Important: Keys often need an active balance or top-up before they can call generation endpoints. If requests return 401 or payment-related errors, check billing and key status in the console first.
Local setup tip
Point GET3W_API_KEY at your key in development (keep .env files out of version control). In CI, inject secrets from your provider’s vault. Rotating keys periodically limits the blast radius if a key leaks.
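As a sketch of the fail-fast pattern above (the GET3W_API_KEY variable name matches this guide; the helper itself is illustrative):

```python
import os

def load_api_key(var: str = "GET3W_API_KEY") -> str:
    """Read the key from the environment and fail fast with a clear message."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it or add it to your (gitignored) .env")
    return key
```

Failing at startup with a named variable beats a cryptic 401 deep inside a pipeline.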
Your first API call
Get3W exposes a versioned REST API. The base path is https://api.get3w.com/api/v3. You authenticate with a Bearer token on every request.
Below, export your key as GET3W_API_KEY and swap the model path (/get3w/flux-dev here) for the endpoint you want (image, video, audio, etc.—see list models on the docs site).
Python example
```python
import os

import requests

API_KEY = os.environ["GET3W_API_KEY"]
BASE = "https://api.get3w.com/api/v3"

resp = requests.post(
    f"{BASE}/get3w/flux-dev",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={"prompt": "A cat in a space suit, cinematic lighting"},
    timeout=60,
)
body = resp.json()
print(resp.status_code, body)
```
JavaScript (Node.js) example
```javascript
// Requires Node 18+ for global fetch; top-level await needs an ES module
// ("type": "module" in package.json, or an .mjs file).
const API_KEY = process.env.GET3W_API_KEY;
const BASE = "https://api.get3w.com/api/v3";

const res = await fetch(`${BASE}/get3w/flux-dev`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    prompt: "A cat in a space suit, cinematic lighting",
  }),
});
const body = await res.json();
console.log(res.status, body);
```
A successful submission usually returns code: 200 with a prediction id and often a urls.get pointer for the result endpoint. Treat the HTTP status and the JSON code field consistently in your client.
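A minimal helper for that consistency check, assuming the envelope shape shown in this guide (a top-level code field and data.id); confirm the exact field names against the reference:

```python
def extract_task_id(status_code: int, body: dict) -> str:
    """Accept a submit response only when both the HTTP status and the
    JSON `code` field indicate success, then return the prediction id."""
    if status_code != 200 or body.get("code") != 200:
        raise RuntimeError(f"submit failed: HTTP {status_code}, body {body!r}")
    return body["data"]["id"]
```

Checking both fields in one place keeps later code (polling, webhooks) from silently consuming an error payload as if it were a task.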
Choosing a model endpoint
Models differ by modality (image, video, audio, 3D) and vendor. Call the list models API or browse the docs index to find the path segment after /api/v3/—for example, image models often look like /get3w/<model-id>. Start with a single stable model until your error handling and result pipeline are solid, then expand the matrix.
Handling results: polling
Many models run asynchronously. After submit, poll the result URL until the task reaches a terminal state.
Poll in Python
```python
import time

task_id = body["data"]["id"]  # from submit response

while True:
    r = requests.get(
        f"{BASE}/predictions/{task_id}/result",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=60,
    )
    data = r.json()["data"]
    status = data["status"]
    if status == "completed":
        print("Outputs:", data.get("outputs"))
        break
    if status == "failed":
        print("Error:", data.get("error"))
        break
    time.sleep(2)
```
Poll in Node.js
```javascript
async function pollResult(taskId) {
  while (true) {
    const r = await fetch(`${BASE}/predictions/${taskId}/result`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const json = await r.json();
    const { status, outputs, error } = json.data ?? {};
    if (status === "completed") return outputs;
    if (status === "failed") throw new Error(error ?? "Task failed");
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}
```
Typical statuses include pending, processing, completed, and failed. Back off polling if you hit rate limits (429).
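One way to back off is a jittered, doubling delay schedule; the base, cap, and attempt count below are illustrative choices, not Get3W-mandated values:

```python
import random

def backoff_delays(base: float = 2.0, cap: float = 30.0, attempts: int = 6):
    """Yield sleep intervals that double up to a cap, with jitter
    (each interval is scaled into [0.5x, 1.0x) of its nominal value)
    so many clients do not retry in lockstep."""
    delay = base
    for _ in range(attempts):
        yield min(cap, delay) * (0.5 + random.random() / 2)
        delay *= 2
```

Replace the fixed time.sleep(2) in the polling loop with these intervals once you see 429s.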
Webhooks instead of polling
For production, webhooks reduce latency and API traffic: you register a callback URL, and Get3W notifies you when a prediction finishes. Configure webhook URLs and verification per the webhooks and verify webhooks sections on get3w.com/docs. Always verify signatures before trusting payloads, respond quickly with 2xx, and process work asynchronously in your handler.
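Signature verification is typically a constant-time HMAC comparison. The sketch below assumes an HMAC-SHA256 hex signature; Get3W’s actual header name and signing scheme are defined in the verify-webhooks section of the docs:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, payload: bytes, received_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    to the received signature in constant time (prevents timing attacks)."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_hex)
```

Verify against the raw body bytes before JSON parsing; re-serialized JSON can differ byte-for-byte and break the comparison.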
Error handling
Design clients around HTTP status codes and the JSON envelope (code, message, data). Common cases:
| Situation | What to do |
|---|---|
| 401 | Rotate or fix the API key; confirm the Authorization: Bearer header. |
| 400 / 422 | Validate inputs against the model schema; check required fields and types. |
| 429 | Retry with exponential backoff; respect Retry-After if present. |
| 5xx | Retry idempotent reads; for submits, use idempotency or deduplication if your integration supports it. |
Log correlation IDs or prediction IDs so you can reference them when contacting support. Full error semantics are documented under error codes.
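The retry decisions above can be collapsed into one small policy function; the base delay and cap here are illustrative:

```python
from typing import Optional

def next_delay(status: int, retry_after: Optional[str], attempt: int,
               base: float = 1.0, cap: float = 60.0) -> Optional[float]:
    """Return how long to wait before retrying, or None to stop retrying.
    429 honors a numeric Retry-After when present; 429 without one and
    5xx fall back to exponential backoff; other statuses do not retry."""
    if status == 429 and retry_after is not None:
        try:
            return min(cap, float(retry_after))
        except ValueError:
            pass  # Retry-After may be an HTTP date; fall through to backoff
    if status == 429 or 500 <= status < 600:
        return min(cap, base * (2 ** attempt))
    return None  # 4xx validation/auth errors: fix the request instead
```

Keeping the policy separate from the transport makes it easy to unit-test and to tune per endpoint.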
Timeouts and production clients
Set explicit read/connect timeouts on HTTP clients so hung connections do not exhaust worker pools. For long-running video jobs, separating “submit” from “poll” into different workers or queues keeps your API responsive. If you retry submits after a network failure, ensure you can detect duplicate tasks (same logical job id) to avoid double billing when the first request actually succeeded.
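A minimal in-process sketch of submit deduplication by logical job id; production code would back this with shared storage (e.g. Redis or a database table), and note it only helps once a submit has returned—a lost response still needs a server-side idempotency key, which is why the text hedges on integration support:

```python
class SubmitDeduper:
    """Remember which logical jobs have already been submitted so a retry
    after a transient failure reuses the existing task instead of creating
    (and billing) a duplicate."""

    def __init__(self):
        self._seen = {}  # logical job id -> task id

    def submit(self, job_id: str, do_submit) -> str:
        if job_id in self._seen:
            return self._seen[job_id]  # duplicate: reuse the prior task
        task_id = do_submit()  # callable that performs the real POST
        self._seen[job_id] = task_id
        return task_id
```

Derive job_id deterministically from your inputs (e.g. a hash of prompt plus model) so retries of the same logical job collide on purpose.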
Next steps
- Browse get3w.com/docs for submit task, get result, uploads, and streaming.
- Add monitoring for failure rates, latency, and cost per model.
- Harden secrets (env vars, vaults) and restrict keys by environment.
With account setup, one POST, and either polling or webhooks, you have a minimal but production-shaped Get3W API tutorial path you can extend into full pipelines.