PixelDojo API Reference

Agent-first async REST API for AI image and video generation. 136 models currently available.

Onboarding path: create an API key, buy credits, discover a model, fetch its schema, submit a job, then poll or use webhooks. No subscription is required for API access.

Overview

Base URL
https://pixeldojo.ai/api/v1
Auth
Authorization: Bearer YOUR_API_KEY
Format
JSON request/response
Pattern
Async jobs: discover schema, submit via POST, poll via GET, replay webhooks when needed
Models
136 enabled (77 image, 59 video)

Authentication

All API requests require an API key sent as a Bearer token in the Authorization header. API keys are available to signed-in accounts, and usage is billed against prepaid credits.

Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

Create API keys at /api-platform/api-keys. API access does not require a subscription. Buy credits at /api-platform/buy-credits.

Endpoints

GET /api/v1/models

Public discovery endpoint for all enabled image and video models.

Example Request:

curl "https://pixeldojo.ai/api/v1/models?detailed=true"

Example Response:

{
  "models": [
    {
      "apiId": "flux-1.1-pro",
      "name": "Flux 1.1 Pro",
      "description": "High-quality image generation with strong prompt adherence.",
      "modality": "image",
      "creditCost": {
        "default": 1,
        "type": "fixed",
        "amount": 1
      }
    }
  ],
  "total": 42,
  "imageCount": 25,
  "videoCount": 17
}
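Once the discovery response is in hand, an agent can select a model client-side. A minimal sketch; the `response` dict mirrors the example payload above (trimmed to the fields used here), and `affordable_models` is an illustrative helper, not part of the API:

```python
# Filter the /models discovery response by modality and credit budget.
# `response` is a trimmed stand-in for the real payload.
response = {
    "models": [
        {"apiId": "flux-1.1-pro", "modality": "image", "creditCost": {"default": 1}},
        {"apiId": "flux-2-max", "modality": "image", "creditCost": {"default": 2}},
        {"apiId": "xai-video", "modality": "video", "creditCost": {"default": 10}},
    ],
    "total": 3,
}

def affordable_models(response: dict, modality: str, max_credits: float) -> list[str]:
    """apiIds of models of the given modality costing at most max_credits, cheapest first."""
    matches = [
        m for m in response["models"]
        if m["modality"] == modality and m["creditCost"]["default"] <= max_credits
    ]
    matches.sort(key=lambda m: m["creditCost"]["default"])
    return [m["apiId"] for m in matches]

print(affordable_models(response, "image", 1))  # ['flux-1.1-pro']
```

The same filtering can of course be done server-side by reading `?detailed=true` output once and caching it.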
GET /api/v1/models/{apiId}

Fetch model capabilities, parameters, and the canonical request schema for one model.

Example Request:

curl "https://pixeldojo.ai/api/v1/models/flux-1.1-pro"

Example Response:

{
  "apiId": "flux-1.1-pro",
  "name": "Flux 1.1 Pro",
  "modality": "image",
  "requestSchema": {
    "type": "object",
    "additionalProperties": false,
    "required": [
      "prompt"
    ],
    "properties": {
      "prompt": {
        "type": "string",
        "description": "Text prompt"
      }
    }
  },
  "endpoints": {
    "run": "/api/v1/models/flux-1.1-pro/run",
    "schema": "/api/v1/models/flux-1.1-pro/schema"
  }
}
GET /api/v1/models/{apiId}/schema

Return the model request schema as JSON so agents and SDKs can build valid payloads.

Example Request:

curl "https://pixeldojo.ai/api/v1/models/flux-1.1-pro/schema"

Example Response:

{
  "apiId": "flux-1.1-pro",
  "name": "Flux 1.1 Pro",
  "modality": "image",
  "schema": {
    "title": "Flux 1.1 ProRequest",
    "type": "object",
    "additionalProperties": false,
    "required": [
      "prompt"
    ],
    "properties": {
      "prompt": {
        "type": "string",
        "description": "Text prompt"
      }
    }
  }
}
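Agents can pre-validate a payload against the returned schema before spending credits. A hand-rolled sketch covering only the keywords in the example schema (`required`, property `type`, `additionalProperties`); `is_valid_payload` is an illustrative helper, and a full JSON Schema validator (e.g. the `jsonschema` package) would handle the remaining keywords:

```python
# Check a payload against a model's request schema before submitting a job.
# `schema` mirrors the example response above.
schema = {
    "type": "object",
    "additionalProperties": False,
    "required": ["prompt"],
    "properties": {"prompt": {"type": "string", "description": "Text prompt"}},
}

# Map JSON Schema type names to Python types (subset used by the sketch).
TYPES = {"string": str, "number": (int, float), "boolean": bool, "object": dict}

def is_valid_payload(payload: dict, schema: dict) -> bool:
    """True if payload satisfies the (simplified) request schema."""
    for key in schema.get("required", []):
        if key not in payload:
            return False
    props = schema.get("properties", {})
    for key, value in payload.items():
        if key not in props:
            if schema.get("additionalProperties") is False:
                return False
            continue
        expected = TYPES.get(props[key].get("type"))
        if expected and not isinstance(value, expected):
            return False
    return True

print(is_valid_payload({"prompt": "A sunset"}, schema))  # True
print(is_valid_payload({"prompt": 42}, schema))          # False
```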
POST /api/v1/models/{apiId}/run

Submit an async job for any image or video model. Match the request body to the schema from /models/{apiId}/schema.

Example Request:

curl -X POST "https://pixeldojo.ai/api/v1/models/flux-1.1-pro/run" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A sunset", "aspect_ratio": "1:1"}'

Example Response:

{
  "jobId": "job_abc123",
  "status": "pending",
  "statusUrl": "https://pixeldojo.ai/api/v1/jobs/job_abc123",
  "creditCost": 1,
  "creditsRemaining": 99
}
GET /api/v1/jobs

List recent jobs for the authenticated API key owner with optional filters.

Example Request:

curl "https://pixeldojo.ai/api/v1/jobs?limit=10&status=completed" -H "Authorization: Bearer YOUR_API_KEY"

Example Response:

{
  "jobs": [
    {
      "jobId": "job_abc123",
      "apiId": "flux-1.1-pro",
      "status": "completed",
      "creditCost": 1,
      "refunded": false,
      "assets": [
        {
          "assetId": "job_abc123:image:0",
          "kind": "image",
          "url": "https://temp.pixeldojo.ai/...png",
          "apiId": "flux-1.1-pro",
          "jobId": "job_abc123",
          "expiresAt": "2026-03-17T12:00:00.000Z"
        }
      ],
      "webhook": {
        "configured": true,
        "delivered": true,
        "attempts": 1
      },
      "createdAt": "2026-03-17T11:59:00.000Z",
      "updatedAt": "2026-03-17T12:00:00.000Z",
      "expiresAt": "2026-03-18T12:00:00.000Z"
    }
  ],
  "total": 1,
  "filters": {
    "limit": 10,
    "status": "completed"
  }
}
GET /api/v1/jobs/{jobId}

Check job status and retrieve outputs, asset references, and webhook delivery state when complete.

Example Request:

curl "https://pixeldojo.ai/api/v1/jobs/job_abc123" -H "Authorization: Bearer YOUR_API_KEY"

Example Response:

{
  "jobId": "job_abc123",
  "apiId": "flux-1.1-pro",
  "status": "completed",
  "creditCost": 1,
  "refunded": false,
  "output": {
    "images": [
      "https://temp.pixeldojo.ai/...png"
    ]
  },
  "assets": [
    {
      "assetId": "job_abc123:image:0",
      "kind": "image",
      "url": "https://temp.pixeldojo.ai/...png",
      "apiId": "flux-1.1-pro",
      "jobId": "job_abc123",
      "expiresAt": "2025-01-23T12:00:00Z"
    }
  ],
  "webhook": {
    "configured": true,
    "url": "https://example.com/webhook",
    "delivered": true,
    "attempts": 1
  },
  "createdAt": "2025-01-23T11:58:00Z",
  "updatedAt": "2025-01-23T12:00:00Z",
  "expiresAt": "2025-01-24T12:00:00Z"
}
GET /api/v1/jobs/{jobId}/webhook

Inspect webhook configuration and delivery status for a job.

Example Request:

curl "https://pixeldojo.ai/api/v1/jobs/job_abc123/webhook" -H "Authorization: Bearer YOUR_API_KEY"

Example Response:

{
  "jobId": "job_abc123",
  "apiId": "flux-1.1-pro",
  "status": "completed",
  "webhook": {
    "configured": true,
    "url": "https://example.com/webhook",
    "delivered": true,
    "attempts": 1
  }
}
POST /api/v1/jobs/{jobId}/webhook

Redeliver the terminal webhook for a completed or failed job.

Example Request:

curl -X POST "https://pixeldojo.ai/api/v1/jobs/job_abc123/webhook" -H "Authorization: Bearer YOUR_API_KEY"

Example Response:

{
  "replayed": true,
  "job": {
    "jobId": "job_abc123",
    "apiId": "flux-1.1-pro",
    "status": "completed",
    "webhook": {
      "configured": true,
      "delivered": true,
      "attempts": 2
    }
  }
}
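A typical pattern is to inspect the webhook state via GET /api/v1/jobs/{jobId}/webhook and replay only when a configured webhook for a terminal job never landed. The `should_replay` helper below is an illustrative sketch of that decision; the actual replay is the POST shown above:

```python
# Decide whether to replay a job's terminal webhook, based on the
# status and webhook objects returned by the webhook-inspection endpoint.
def should_replay(status: str, webhook: dict) -> bool:
    """Replay only terminal jobs whose configured webhook was never delivered."""
    terminal = status in {"completed", "failed"}
    return terminal and webhook.get("configured", False) and not webhook.get("delivered", False)

print(should_replay("completed", {"configured": True, "delivered": False, "attempts": 3}))   # True
print(should_replay("processing", {"configured": True, "delivered": False, "attempts": 0}))  # False
print(should_replay("completed", {"configured": True, "delivered": True, "attempts": 1}))    # False
```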

Available Models (136)

Also available programmatically: GET https://pixeldojo.ai/api/v1/models. This list updates automatically when models are enabled or disabled.

For any single model, fetch GET https://pixeldojo.ai/api/v1/models/{apiId}/schema to retrieve the canonical request schema before submitting a job.

Image Models (77)

| Model ID | Name | Credits | Description |
| --- | --- | --- | --- |
| change-camera-angle | Change Camera Angle | 1 | Camera-aware editing via fal.ai Qwen Image Edit 2511 with multi-angle LoRA. 360° orbit, tilt, and zoom. |
| consistent-characters | Consistent Characters | 1 | Generate consistent character variations with FLUX Kontext, Nano Banana Pro/2, Flux 2 Dev, or Qwen Image 2 Pro. |
| creative-upscale | Creative Upscale | 0.5 | Clarity Upscaler (creative upscale) via Replicate. Boost detail with stable-diffusion refinement. |
| dreamina | Dreamina 3.1 | 1 | ByteDance Dreamina 3.1. 4MP cinematic text-to-image with precise style control. |
| ernie | Ernie | 1 | Baidu Ernie text-to-image (fal.ai). Multilingual prompts and built-in prompt expansion. |
| face-enhance | Face Enhance | 2 | Crystal Upscaler via Replicate. Face-detail preserving upscale, cost scales with output megapixels. |
| flux | FLUX | 1 | FLUX family on Replicate. Schnell, Dev, Pro, Kontext, Ultra, and LoRA remix variants in one entrypoint. |
| flux-2-flex | Flux 2 Flex | 1.5 | Max-quality with up to 10 reference images |
| flux-2-klein-4b | Flux 2 Klein 4B | 0.1 | Very fast generation and editing with up to 5 reference images |
| flux-2-klein-9b | Flux 2 Klein 9B | 0.5 | 4-step distilled FLUX.2 [klein] foundation model for flexible control |
| flux-2-pro | Flux 2 Pro | 1.5 | High-quality with up to 8 reference images |
| flux-2-max | Flux 2 Max | 2 | The highest fidelity image model from Black Forest Labs |
| flux-2-dev | Flux 2 Dev | 1 | Fast quality with up to 4 reference images |
| flux-2-lora | Flux 2 Dev + LoRA | 1 | Dev model with custom LoRA support |
| flux-edit | Flux Edit (Kontext) | 1 | Black Forest Labs FLUX.1 Kontext for text-driven image editing. Dev (open-weight), Pro (state-of-the-art), and Max (premium typography). |
| flux-dev | Flux Dev | 1 | High-quality development model with configurable steps, guidance, and LoRA support. |
| flux-krea-dev | Flux Krea Dev | 1 | Photorealistic generation that avoids the oversaturated AI look. LoRA compatible. |
| flux-dev-multi-lora | Flux Dev Multi LoRA | 1 | Supports multiple custom LoRAs simultaneously for complex style combinations. |
| flux-1.1-pro | Flux 1.1 Pro | 1 | Latest pro model with enhanced quality and strong prompt adherence. |
| flux-1.1-pro-ultra | Flux 1.1 Pro Ultra | 1.5 | Highest quality Flux model with raw mode for natural-looking images. |
| flux-kontext-pro | Flux Kontext Pro | 1 | Advanced model with state-of-the-art performance for both generation and editing. |
| flux-kontext-max | Flux Kontext Max | 2 | Premium model with maximum performance and improved typography for generation and editing. |
| gemini-flash | Google Gemini Flash | 1 | Fast generation with Gemini 2.5 Flash |
| nano-banana-pro | Google Nano Banana Pro | 3 | SOTA with accurate typography and reasoning |
| nano-banana-2 | Google Nano Banana 2 | 3 | Next-generation SOTA model with stronger consistency |
| google-nano-banana | Nano Banana Edit | 3 | Google Nano Banana image editing. Multi-image fusion + edit instruction with Standard/Pro/Pro-fal tiers and 1K/2K/4K resolution. |
| gpt-image-low | GPT-Image 1.5 Low | 1 | Fast, lower detail generation |
| gpt-image-medium | GPT-Image 1.5 Medium | 1 | Balanced quality and speed |
| gpt-image-high | GPT-Image 1.5 High | 4 | Maximum detail and quality |
| gpt-image-1-5-edit | GPT-Image 1.5 Edit | 4 | OpenAI GPT-Image 1.5 image editing — supply 1-8 reference images plus an edit instruction. Optional transparent background and high-fidelity input mode. |
| gpt-image-2 | GPT Image 2 | 5 | OpenAI GPT Image 2 via fal.ai — next-generation image model with 4K rendering and sharper text fidelity. |
| gpt-image-2-edit | GPT Image 2 Edit | 5 | OpenAI GPT Image 2 image editing — supply 1-8 reference images plus an edit instruction. Optional mask for inpainting. 4K-capable; pricing varies by quality + size. |
| hidream-l1-fast | HiDream L1 Fast | 1 | HiDream L1 Fast - Fast generation |
| hidream-l1-dev | HiDream L1 Dev | 1 | HiDream L1 Dev - Fast generation |
| hidream-l1-full | HiDream L1 Full | 2 | HiDream L1 Full - Highest quality |
| hidream-e1.1 | HiDream E1.1 | 1 | HiDream E1.1 - Fast generation |
| hidream-edit | HiDream E1.1 Edit | 1 | HiDream E1.1 — image-conditioned editing with prompt + reference image. Single image input. |
| hunyuan-3d | Hunyuan 3D | 4 | Tencent Hunyuan 3D 3.1. Generate 3D meshes from a text prompt or a single image. |
| ideogram-character | Ideogram Character | 5 | Generate consistent characters from a single reference image in many styles. |
| image-editor | Character Stylist | 1 | One-shot FLUX Kontext variants — filters, cartoonify, iconic locations, haircut swap, headshots, renaissance, face-to-many, and more. |
| image-relighting | Image Relighting | 1 | Relight images with Magic Lighting, Nano Banana Pro/2, or Qwen Image Edit — multi-provider routing with per-model credit rates. |
| image-to-image-flux | Flux Image to Image | 1 | FLUX Dev LoRA image-to-image on Replicate. Prompt + source image + optional LoRA weights. |
| imagineart | Imagineart | 1.5 | Imagineart 1.5 Pro image generation (fal.ai). |
| kling-image | Kling Image V3 | 1 | Kling Image V3 (fal.ai). High-quality text-to-image with flexible aspect ratios. |
| kling-image-edit | Kling Image Edit | 1 | Kling Image V3 (fal.ai) image-to-image editing with a text instruction. |
| magnific-upscaler | Magnific Upscaler | 3 | Freepik Magnific upscaler. Creative or precision mode, up to 16x. |
| openai-image-1 | OpenAI Image 1 | 1 | OpenAI GPT Image 1 Mini. Text-to-image via Replicate. |
| openai-image-1-edit | OpenAI Image 1 Edit | 1 | OpenAI GPT Image 1 Mini image editing — combine 1-8 reference images with a text edit instruction. Supports transparent or opaque backgrounds. |
| outpaint | Outpaint | 1 | fal.ai Image Apps V2 outpainting. Expand an image beyond its original edges. |
| p-image | P-Image | 0.1 | Pruna P-Image. Sub-second text-to-image with optional custom dimensions. |
| p-image-edit | P-Image Edit | 0.25 | Pruna P-Image Edit. Fast image editing with up to 5 reference images. |
| ponyxl-ponyrealism-v23 | Pony Realism | 1 | Pony Realism - Stylized anime generation |
| ponyxl-tponynai3-v7 | Pony NAI | 1 | Pony NAI - Stylized anime generation |
| ponyxl-waianinsfwponyxl-v140 | Wai ANI | 1 | Wai ANI - Stylized anime generation |
| qwen-image-plus | QWEN Image Plus | 1 | Fast generation with excellent quality |
| qwen-image-max | QWEN Image Max | 2 | Highest quality output |
| qwen-image-2.0 | QWEN Image 2.0 | 1 | Fast, balanced image generation and editing |
| qwen-image-2.0-pro | QWEN Image 2.0 Pro | 2 | Enhanced text rendering, realistic textures, and semantic adherence |
| qwen-image-2-edit | Qwen Image 2 Edit | 1 | Alibaba DashScope Qwen Image 2 edit — supply 1-3 reference images plus an edit instruction. Standard and Pro variants. |
| qwen-image-edit | Qwen Image Edit | 1 | Alibaba DashScope Qwen Image edit — supply 1-3 reference images plus an edit instruction. Plus and Max model variants. |
| recraft-v4 | Recraft V4 | 1 | Recraft's latest image model. Strong prompt accuracy, art-directed composition, integrated text rendering. Fast and cost-efficient at standard resolution. |
| recraft-v4-pro | Recraft V4 Pro | 6 | Recraft V4 at ~2048px resolution. Same design taste and prompt accuracy as V4, with higher resolution for print-ready and large-scale work. |
| recraft-v4-svg | Recraft V4 SVG | 2 | Production-ready SVG vector images from text. Recraft V4's design taste applied to vector output — clean geometry, structured layers, editable paths. |
| recraft-v4-pro-svg | Recraft V4 Pro SVG | 8 | Detailed SVG vector graphics from text. Recraft V4 Pro's design taste with more geometric detail and finer paths — clean layers, editable output, scalable to any size. |
| redux-flux | Flux Redux | 1 | Black Forest Labs Flux Redux image variations — feed a source image, get stylistic riffs. |
| seedream-3 | Seedream 3 | 1 | ByteDance Seedream 3 text-to-image via Replicate. |
| seedream-4 | Seedream 4.5 | 1 | ByteDance Seedream 4.5 — new-generation image creation with superior aesthetics, text rendering, and up to 4K resolution. |
| seedream-5-lite | Seedream 5 Lite | 1 | ByteDance Seedream 5.0 Lite — fast, high-quality image generation and editing with strong aesthetics and text rendering. |
| wan-2.6-image | WAN 2.6 Image | 1 | Alibaba WAN 2.6 text-to-image with prompt enhancement and multi-image output. |
| wan-2.6-image-edit | WAN 2.6 Image Edit | 1 | Alibaba WAN 2.6 image editing. Up to 4 reference images. |
| wan-2.7-image | WAN 2.7 Standard | 1 | Faster Wan 2.7 image generation and editing |
| wan-2.7-image-pro | WAN 2.7 Pro | 2 | Higher quality Wan 2.7 tier with 4K support for text-to-image |
| wan-2.7-image-edit | WAN 2.7 Image Edit | 1 | Alibaba WAN 2.7 image editing. Standard and Pro tiers, supports 1-4 input images for fusion edits. |
| wan-image | WAN 2.2 Image | 1 | Fast cinematic image generation (3-6 seconds) with up to 4MP output and optional LoRA support. |
| xai-image | Grok Imagine | 1 | xAI Grok Imagine. Fast tier for quick iteration, Quality tier for higher fidelity at 1k or 2k. |
| xai-image-edit | Grok Image Edit | 1 | xAI Grok image editing. Sync response (no polling). Provide an image URL and a text edit instruction. Optional quality tier for 1k/2k high-fidelity edits. |
| z-image-turbo | Z Image Turbo | 0.5 | Super-fast 6B parameter text-to-image with great text rendering and LoRA support. |

Video Models (59)

| Model ID | Name | Credits | Description |
| --- | --- | --- | --- |
| grok-r2v | Grok Imagine R2V | 10 | xAI Grok Imagine reference-to-video via Replicate. 1 to 7 reference images plus prompt for 1 to 10 second clips at 480p or 720p. |
| grok-video-extend | Grok Video Extend | 12 | xAI Grok Imagine video extension. Continue an existing MP4 with a prompt-directed extension (2 to 10 seconds). |
| hailuo-standard | Hailuo Standard | 8 | Premium quality text-to-video and image-to-video |
| hailuo-fast | Hailuo Fast | 4 | Fast image-to-video generation |
| happyhorse-1.0-r2v | Happy Horse 1.0 Reference to Video | 4/sec | Alibaba Happy Horse 1.0 reference-to-video — multi-reference image input that preserves subject characters, driven by a text prompt. 720p / 1080p, 3-15 second clips. |
| happyhorse-1.0-t2v | Happy Horse 1.0 Text-to-Video | 4/sec | Text-to-video with 720p/1080p output and 2-15 second durations |
| happyhorse-1.0-i2v | Happy Horse 1.0 Image-to-Video | 4/sec | Image-to-video animation with 720p/1080p output and 2-15 second durations |
| happyhorse-1.0-video-edit | Happy Horse 1.0 Video Edit | 4/sec | Alibaba Happy Horse 1.0 video edit — apply style transfer or local replacement to a source video using text prompts and optional reference images. 720p / 1080p, 3-15 second output. |
| heygen-avatar | Heygen Avatar | 2/sec | Heygen Avatar 4 via fal.ai. Animate a portrait with prompt-driven speech or an audio track, with optional background and captions. |
| kling-motion-control | Kling Motion Control v3 Standard | 3/sec | Kling Video v3 Standard motion control endpoint |
| kling-motion-control-pro | Kling Motion Control v3 Pro | 4/sec | Kling Video v3 Pro motion control endpoint |
| kling-reference-to-video | Kling Reference to Video | 15 | Kling O3 reference-driven video generation. Image or video references, Standard or Pro tier. |
| kling-v2-6 | Kling 2.6 Pro | 15 | Kling Video v2.6 Pro (fal.ai). Text-to-video or image-to-video, 5 or 10 seconds, with audio generation. |
| kling-video-v3-standard-text | Kling Video v3 Standard (Text) | 6/sec | Standard text-to-video with native audio |
| kling-video-v3-standard-image | Kling Video v3 Standard (Image) | 6/sec | Standard image-to-video with native audio |
| kling-video-v3-pro-text | Kling Video v3 Pro (Text) | 8/sec | Pro text-to-video with cinematic quality and native audio |
| kling-video-v3-pro-image | Kling Video v3 Pro (Image) | 8/sec | Pro image-to-video with cinematic quality and native audio |
| kling-video-edit | Kling Video Edit | 40 | Kling O3 video-to-video edit. Standard or Pro, with optional reference images and audio preservation. |
| lip-sync | Lip Sync | 5 | Replicate sync/lipsync-2. Align mouth movements in a video to a separate audio track. |
| ltx-2-fast-t2v | LTX 2.3 Fast Text-to-Video | 2/sec | Fast text-to-video generation (6-20s, 1080p-2160p). |
| ltx-2-fast-i2v | LTX 2.3 Fast Image-to-Video | 2/sec | Fast image-to-video generation (6-20s, 1080p-2160p). |
| ltx-2-pro-t2v | LTX 2.3 Pro Text-to-Video | 2/sec | Higher quality text-to-video generation (6-10s, 1080p-2160p). |
| ltx-2-pro-i2v | LTX 2.3 Pro Image-to-Video | 2/sec | Higher quality image-to-video generation (6-10s, 1080p-2160p). |
| ltx-2-pro-extend | LTX 2.3 Pro Extend Video | 2/sec | Extend an existing video clip from the start or end (1-20s, Pro tier only). |
| omnihuman | OmniHuman 1.5 | 45 | ByteDance OmniHuman 1.5 via Replicate. Audio-driven talking-head video with lip sync. |
| p-video | P-Video | 0.5/sec | Pruna P-Video — video generation with text/image/audio conditioning, draft mode, and 720p/1080p outputs. |
| p-video-avatar | P Video Avatar | 1/sec | Pruna P Video Avatar — animate a portrait into a talking avatar from a script or an audio file. 30 voices, 10 languages, 720p / 1080p. |
| pixverse | Pixverse v5.6 | 7.5 | Pixverse v5.6 video generation via Replicate — text-to-video or image-to-video with optional audio, at 360p–1080p. |
| pixverse-v6 | Pixverse V6 | 10 | Pixverse V6 video generation via Runware. Text-to-video, image-to-video (start frame), or multi-clip (start + end frame). |
| runway-gen4-video | Runway Gen-4.5 Video | 15 | Runway Gen-4.5 video generation. Text-to-video or image-to-video, 5 or 10 seconds. |
| runway-video | Runway | 15 | Canonical version-agnostic Runway video API ID. |
| runway-gen4 | Runway Gen-4 (Legacy API ID) | 15 | Legacy alias for clients pinned to runway-gen4; maps to the current Runway model. |
| seedance-1.5 | Seedance 1 | 8 | ByteDance Seedance 1 video generation. Text-to-video or image-to-video with optional end frame. |
| seedance-2-high | Seedance 2 High | 4/sec | Higher-quality Seedance 2.0 video generation (supports 1080p) |
| seedance-2-reference | Seedance 2 Reference to Video | 20 | Seedance 2.0 multimodal reference-to-video. Combine up to 9 images, 3 video clips, and 3 audio tracks to guide characters, motion, and sound. |
| seedance-video-edit | Seedance 2 Video Edit | 25 | Edit source videos with Seedance 2.0 using prompted changes, optional reference images, and 480p, 720p, or 1080p output. |
| text-to-music | Text to Music | 2 | ElevenLabs Music via Replicate. Generate music from a text prompt. |
| veo-3.1-fast | VEO 3.1 Fast | 3/sec | Faster generation at 3 credits per second |
| veo-3.1-standard | VEO 3.1 Standard | 8/sec | Higher quality at 8 credits per second |
| veo-3.1-lite | VEO 3.1 Lite | 1.5/sec | Runware-powered Lite variant at 1.5 credits/sec for 720p and 2 credits/sec for 1080p. No reference images, no audio generation, no 1:1 aspect ratio. |
| video-autocaption | Video Autocaption | 5 | TikTok-style auto-captioning via Replicate. |
| video-reframe | Video Reframe | 8 | Luma Reframe Video via Replicate. Change a video's aspect ratio intelligently. |
| video-to-sound | Video to Sound | 2 | ThinkSound via Replicate. Generate a sound effect track from a video. |
| video-transform | Video Transform | 20 | Runway Gen4 Aleph via Replicate. Transform the first 5 seconds of a video with a prompt. |
| video-upscaler | Video Upscaler | 10 | Topaz Labs Video Upscale via Replicate. Upscale video resolution and FPS. |
| wan-2.2-standard | WAN 2.2 Standard | 3 | Premium quality with enhanced detail |
| wan-2.2-plus | WAN 2.2 Plus | 10 | Official Alibaba model with 1080p support |
| wan-2.2-extended | WAN 2.2 Extended | 1.2/sec | fal.ai WAN 2.2 with up to 10-second videos and dual LoRA support |
| wan-2.2-animate | WAN 2.2 Animate | 2 | WAN 2.2 video animation. Drive a character image with a motion reference video. |
| wan-2.2-replace | WAN 2.2 Replace | 2 | WAN 2.2 character replacement. Swap a character in a source video while preserving scene and motion. |
| wan-2.6-standard | WAN 2.6 Standard | 2.5/sec | Higher quality, 720p/1080p support |
| wan-2.6-flash | WAN 2.6 Flash | 1/sec | Fast and affordable image-to-video |
| wan-2.7-t2v | WAN 2.7 Text-to-Video | 2.5/sec | Text-to-video with audio sync, 720p/1080p output, and 2-15 second durations |
| wan-2.7-i2v | WAN 2.7 Image-to-Video | 2.5/sec | Image-to-video and video continuation with optional last-frame control and audio sync |
| wan-reference-to-video | WAN Reference to Video | 4 | Alibaba WAN reference-to-video. Up to 5 image/video references with multi-shot support. |
| wan-video-character-swap | WAN Video Character Swap | 20 | Alibaba WAN character swap. Combine a character image with a reference video to produce a new clip. |
| wan-video-edit | WAN 2.7 Video Edit | 6 | Alibaba WAN 2.7 video editing. Modify an existing clip via prompt with optional reference images. |
| xai-video | Grok Imagine Video | 10 | xAI Grok Imagine video. Text-to-video or image-to-video, 1-15 seconds at 480p or 720p. |
| xai-video-edit | Grok Video Edit | 15 | xAI Grok Imagine Video edit. Transform short clips via Replicate. |

Response Format

Submit Response (202)

{
  "jobId": "job_abc123",
  "status": "pending",
  "statusUrl": "https://pixeldojo.ai/api/v1/jobs/job_abc123",
  "creditCost": 1,
  "creditsRemaining": 99
}

Completed Response (200)

{
  "jobId": "job_abc123",
  "apiId": "flux-1.1-pro",
  "status": "completed",
  "output": {
    "images": [
      "https://temp.pixeldojo.ai/pixeldojotemp/...png"
    ]
  },
  "assets": [
    {
      "assetId": "job_abc123:image:0",
      "kind": "image",
      "url": "https://temp.pixeldojo.ai/pixeldojotemp/...png"
    }
  ],
  "webhook": {
    "configured": true,
    "delivered": true,
    "attempts": 1
  },
  "creditCost": 1,
  "refunded": false,
  "createdAt": "2026-03-17T11:59:00Z",
  "updatedAt": "2026-03-17T12:00:00Z",
  "expiresAt": "2026-03-18T12:00:00Z"
}

Job Statuses

pending: job accepted and queued
processing: generation in progress
completed: outputs and assets are available
failed: generation did not complete (see the error field)

Control Plane

PixelDojo exposes agent-friendly control-plane routes for inspecting request schemas, listing recent jobs, and replaying terminal webhooks.

Installable Surfaces

PixelDojo is designed to be consumable by different kinds of agent runtimes. Use the surface that best matches your stack.

Error Codes

| Code | HTTP Status | Description |
| --- | --- | --- |
| unauthorized | 401 | Missing or invalid API key |
| invalid_json | 400 | Invalid JSON in request body |
| validation_error | 400 | Input validation failed |
| not_found | 404 | Model or job not found |
| insufficient_credits | 402 | Insufficient credits |
| credit_error | 500 | Failed to deduct credits |
| submission_failed | 500 | Failed to submit job |
| expired | 410 | Job has expired |
| rate_limit_exceeded | 429 | Rate limit exceeded |
| internal_error | 500 | Internal server error |
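When wiring retry logic, it helps to split these codes into retryable and terminal groups. The mapping below is one reasonable reading of the table, not an official SDK; `is_retryable` is an illustrative helper:

```python
# Classify PixelDojo error codes for client retry loops.
# Transient/server-side failures are retried; client errors are terminal.
RETRYABLE = {"rate_limit_exceeded", "credit_error", "submission_failed", "internal_error"}
TERMINAL = {"unauthorized", "invalid_json", "validation_error", "not_found",
            "insufficient_credits", "expired"}

def is_retryable(code: str) -> bool:
    """True if the request may succeed on retry without changing the payload."""
    return code in RETRYABLE

print(is_retryable("rate_limit_exceeded"))  # True
print(is_retryable("validation_error"))     # False
```

Validation and authorization failures should surface to the caller immediately; retrying them only burns rate-limit budget.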

Code Examples

cURL

# Submit a job
curl -X POST "https://pixeldojo.ai/api/v1/models/flux-1.1-pro/run" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A sunset", "aspect_ratio": "1:1", "webhook_url": "https://example.com/webhook"}'

# Poll for results
curl "https://pixeldojo.ai/api/v1/jobs/job_abc123" \
  -H "Authorization: Bearer YOUR_API_KEY"

# List recent jobs
curl "https://pixeldojo.ai/api/v1/jobs?limit=10" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Replay a terminal webhook
curl -X POST "https://pixeldojo.ai/api/v1/jobs/job_abc123/webhook" \
  -H "Authorization: Bearer YOUR_API_KEY"

Python

import requests
import time

API_KEY = "your_api_key"
BASE_URL = "https://pixeldojo.ai/api/v1"

model_schema = requests.get(
  f"{BASE_URL}/models/flux-1.1-pro/schema"
).json()

submit_response = requests.post(
  f"{BASE_URL}/models/flux-1.1-pro/run",
  headers={
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
  },
  json={"prompt": "A sunset", "aspect_ratio": "1:1", "webhook_url": "https://example.com/webhook"},
)
job = submit_response.json()

while True:
  status = requests.get(
    job["statusUrl"],
    headers={"Authorization": f"Bearer {API_KEY}"}
  ).json()
  if status["status"] in {"completed", "failed"}:
    print(status)
    break
  time.sleep(2)

JavaScript

const schema = await fetch("https://pixeldojo.ai/api/v1/models/flux-1.1-pro/schema");
const requestSchema = await schema.json();

const submit = await fetch("https://pixeldojo.ai/api/v1/models/flux-1.1-pro/run", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    prompt: "A sunset",
    aspect_ratio: "1:1",
    webhook_url: "https://example.com/webhook"
  })
});

const job = await submit.json();
let result;

do {
  const status = await fetch(job.statusUrl, {
    headers: { "Authorization": "Bearer YOUR_API_KEY" }
  });
  result = await status.json();
  if (result.status === "pending" || result.status === "processing") {
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
} while (result.status === "pending" || result.status === "processing");

console.log(requestSchema.schema, result.assets, result.webhook);

TypeScript

interface JobSubmitResponse {
  jobId: string;
  status: "pending" | "processing";
  statusUrl: string;
  creditCost: number;
  creditsRemaining: number;
  expiresAt: string;
}

interface JobAsset {
  assetId: string;
  kind: "image" | "video";
  url: string;
}

interface JobResult {
  jobId: string;
  apiId: string;
  status: "pending" | "processing" | "completed" | "failed";
  assets: JobAsset[];
  webhook: { configured: boolean; delivered: boolean; attempts: number };
  output?: { images?: string[]; video?: string };
  error?: string;
}

const submit = await fetch("https://pixeldojo.ai/api/v1/models/flux-1.1-pro/run", {
  method: "POST",
  headers: { 
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    prompt: "A sunset",
    aspect_ratio: "1:1",
    webhook_url: "https://example.com/webhook"
  })
});

const job: JobSubmitResponse = await submit.json();
const resultResponse = await fetch(job.statusUrl, {
  headers: { "Authorization": "Bearer YOUR_API_KEY" }
});
const result: JobResult = await resultResponse.json();

if (result.status === "completed") {
  console.log(result.assets);
}

Rate Limits

60 requests per minute across all endpoints.

X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1738800000
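Clients can read these headers to pause before the limit trips. A minimal sketch; `seconds_until_reset` is an illustrative helper, and it treats `X-RateLimit-Reset` as epoch seconds, per the example value above:

```python
# Compute how long to wait based on rate-limit response headers.
def seconds_until_reset(headers: dict, now: float) -> float:
    """0 when requests remain in the window, otherwise seconds until reset."""
    if int(headers.get("X-RateLimit-Remaining", 1)) > 0:
        return 0.0
    return max(0.0, float(headers["X-RateLimit-Reset"]) - now)

print(seconds_until_reset(
    {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1738800000"},
    now=1738799990.0,
))  # 10.0
```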

Best Practices

  1. Store API keys securely — never in client-side code or public repos
  2. Fetch /models/{apiId}/schema before generating dynamic payloads for a model
  3. Poll job status with exponential backoff (start at 2s, max 30s)
  4. Use asset references from job responses to track outputs across retries and orchestration steps
  5. Download outputs promptly — generated content expires after 24 hours
  6. Use seed for reproducible results across identical prompts
  7. Use webhook_url instead of polling for production workloads
  8. Handle rate limits gracefully with retry logic
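The backoff in practice 3 can be sketched as a delay schedule; `backoff_delays` is an illustrative helper assuming a doubling schedule clamped at the cap:

```python
# Exponential backoff delays for polling: start at 2s, double, cap at 30s.
def backoff_delays(start: float = 2.0, cap: float = 30.0, attempts: int = 6) -> list[float]:
    """Doubling delays clamped to `cap`, one entry per polling attempt."""
    return [min(start * (2 ** i), cap) for i in range(attempts)]

print(backoff_delays())  # [2.0, 4.0, 8.0, 16.0, 30.0, 30.0]
```

In a polling loop, sleep for each delay in turn between status checks, stopping as soon as the job reaches a terminal status.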