
LLM.txt Documentation

AI-Ready

Structured markdown documentation optimized for AI assistants and agent runtimes

What is LLM.txt?

AI-optimized documentation for seamless integration

LLM.txt is a structured markdown file designed specifically for AI assistants like Claude, GPT, and other large language models. Instead of navigating complex documentation, you can paste the LLM.txt content into your AI conversation or runtime and get instant, accurate help with API integration and model selection.

Complete API Reference

All endpoints, schemas, and response formats in one place

Agent Reasoning Context

Useful for model selection, endpoint planning, and deciding when to poll versus subscribe to webhooks

Tooling Companion

Pairs with OpenAPI and per-model schema routes for SDK, CLI, and MCP-style tools

How to Use LLM.txt

Three simple ways to leverage AI-powered documentation

Method 1

Copy and Paste

Open the LLM.txt file, copy its contents, and paste it into your conversation with Claude, GPT, or any AI assistant. Then ask questions like "Which model should I use?", "What schema does this model accept?", or "Show me Python code to create a video."

https://pixeldojo.ai/llm.txt
Method 2

Fetch Programmatically

Use the LLM.txt as context in your AI-powered applications or alongside the schema/control-plane endpoints:

Python
import requests
from openai import OpenAI

# Fetch the documentation
llm_docs = requests.get("https://pixeldojo.ai/llm.txt").text

# Use it as system context with OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": f"You are a helpful assistant. Answer using this API documentation:\n\n{llm_docs}"},
        {"role": "user", "content": "How do I generate an image?"}
    ]
)
print(response.choices[0].message.content)
Method 3

Per-Model Documentation

Each model has its own LLM.txt file with focused documentation. Use this when you only need info about a specific model:

/llm/{model-id}.txt

Example: /llm/flux-1.txt, /llm/kling-v2-5.txt
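A minimal sketch of fetching a per-model file, assuming only the `/llm/{model-id}.txt` route shown above; the helper names and error handling are ours:

```python
import requests

BASE_URL = "https://pixeldojo.ai"


def model_doc_url(model_id: str) -> str:
    """Build the per-model LLM.txt URL from a model id."""
    return f"{BASE_URL}/llm/{model_id}.txt"


def fetch_model_docs(model_id: str) -> str:
    """Fetch the focused docs for a single model, raising on HTTP errors."""
    resp = requests.get(model_doc_url(model_id), timeout=10)
    resp.raise_for_status()
    return resp.text


# Example: grab the focused docs for one model before prompting an agent
# flux_docs = fetch_model_docs("flux-1")
```

This keeps the agent's context small: instead of the full `/llm.txt`, you load only the model the task actually needs.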

Per-Model LLM Documentation

Click any model to view its dedicated LLM.txt file


Recommended Agent Tooling Shape

How LLM docs fit into SDK, CLI, and MCP-style integrations

Use /llm.txt and per-model docs as the reasoning layer, then pair them with /api/openapi and /api/v1/models/{apiId}/schema for exact request construction.

This pattern works well for three install surfaces: generated SDKs with helper docs, CLIs that need human-readable guidance, and MCP-style tools that let an agent call list_models, get_model_schema, generate_media, and list_jobs.
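The MCP-style surface above could be sketched as a thin HTTP wrapper. The tool names and the `/api/v1/models/{apiId}/schema` route come from this page; the bearer-auth header, response shapes, and any job routes are assumptions to check against the API reference:

```python
import requests

BASE_URL = "https://pixeldojo.ai"


class PixelDojoTools:
    """Thin wrapper exposing agent-callable tools over the HTTP API."""

    def __init__(self, api_key: str):
        self.session = requests.Session()
        # Bearer auth is an assumption; confirm in the API reference.
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def get_model_schema(self, api_id: str) -> dict:
        """Fetch the exact request schema for one model (route from the docs)."""
        resp = self.session.get(f"{BASE_URL}/api/v1/models/{api_id}/schema")
        resp.raise_for_status()
        return resp.json()

    def get_model_docs(self, model_id: str) -> str:
        """Fetch the per-model LLM.txt as the agent's reasoning context."""
        resp = self.session.get(f"{BASE_URL}/llm/{model_id}.txt")
        resp.raise_for_status()
        return resp.text
```

An agent would call `get_model_docs` first to decide what to build, then `get_model_schema` to construct an exact request body, mirroring the reasoning-layer/schema-layer split described above.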