Integrations

MCP for ChatGPT and Codex

Connect ChatGPT or Codex to GenAsset over MCP so generated images can be saved with reusable metadata.

What it does

The MCP endpoint lets ChatGPT and Codex call GenAsset tools directly. This is the shortest path to your target flow: ask the model to generate an image, then call a tool to save it with prompt and metadata.

MCP endpoint
text
https://genasset.xyz/api/mcp

Current MCP tools
  • workspace_status
  • list_assets
  • save_image_to_genasset
  • load_version_recipe
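As a quick sanity check, you can ask the endpoint for this tool list yourself. The sketch below assumes the endpoint speaks standard MCP JSON-RPC over plain HTTP POST with bearer auth; the exact headers and auth requirements may differ, so treat it as a starting point, not a reference.

```shell
# Build a standard MCP tools/list request (JSON-RPC 2.0).
cat > /tmp/mcp_tools_list.json <<'EOF'
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
EOF

# Send it (uncomment once GENASSET_WORKSPACE_TOKEN is set; header
# requirements are an assumption based on the MCP streamable HTTP transport):
# curl -sS -X POST https://genasset.xyz/api/mcp \
#   -H 'Content-Type: application/json' \
#   -H 'Accept: application/json, text/event-stream' \
#   -H "Authorization: Bearer $GENASSET_WORKSPACE_TOKEN" \
#   --data @/tmp/mcp_tools_list.json
```

The response should enumerate the four tools listed above.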

Connect ChatGPT

In ChatGPT Developer Mode, add a custom MCP app and point it to the endpoint above. If your ChatGPT MCP client does not support bearer env vars yet, pass workspace_token in tool inputs.

First test: call workspace_status with your token and confirm your workspace name appears.
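If you want to see what that first test looks like on the wire, here is a hedged sketch of the underlying tools/call request using the token-in-arguments fallback. The argument name workspace_token comes from this page; the JSON-RPC envelope is the standard MCP shape and may not match every client exactly.

```shell
# workspace_status call with the token passed as a tool argument
# (fallback for clients without bearer env var support).
cat > /tmp/workspace_status.json <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "workspace_status",
    "arguments": {"workspace_token": "ga_your_workspace_token"}
  }
}
EOF

# curl -sS -X POST https://genasset.xyz/api/mcp \
#   -H 'Content-Type: application/json' \
#   --data @/tmp/workspace_status.json
```

A successful call should include your workspace name in the result.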

Connect Codex

From zero, use this exact setup:

Terminal setup
bash
codex --version

cat > ~/.genasset-mcp.env <<'EOF'
export GENASSET_WORKSPACE_TOKEN="ga_your_workspace_token"
EOF

source ~/.genasset-mcp.env

codex mcp add genasset \
  --url https://genasset.xyz/api/mcp \
  --bearer-token-env-var GENASSET_WORKSPACE_TOKEN

codex mcp list

Codex Desktop note (macOS)

If you use the GUI app, set the token in launchd so the app process can read it, then relaunch the app so it picks up the new variable:

Set desktop env var
bash
launchctl setenv GENASSET_WORKSPACE_TOKEN "ga_your_workspace_token"

To remove it later:

Unset desktop env var
bash
launchctl unsetenv GENASSET_WORKSPACE_TOKEN

First test prompt
text
Call workspace_status with {}.
If bearer env is configured, token is picked up automatically.
Fallback: pass {"workspace_token":"ga_your_workspace_token"}.

Generate then save

This is the core flow: generate an image, then call save_image_to_genasset with the asset name, workspace token, and metadata.

Prompt pattern
text
Save this generated image to GenAsset.
asset_name: spring-character
prompt: keep the same character in a new rainy street scene
model: gpt-image-1
source: chatgpt
metadata: {"provider":"openai","repro_level":"partial"}

Image input formats

save_image_to_genasset accepts exactly one image source per call: image_url, image_data_uri, or image_base64.
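Putting the prompt pattern and the image source together, the underlying tool call might look like the sketch below. The argument names shown on this page (asset_name, prompt, model, source, metadata, workspace_token, image_url) are used as-is; the example URL and the JSON-RPC envelope are assumptions.

```shell
# Illustrative save_image_to_genasset call using the image_url source.
cat > /tmp/save_image.json <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "save_image_to_genasset",
    "arguments": {
      "workspace_token": "ga_your_workspace_token",
      "asset_name": "spring-character",
      "image_url": "https://example.com/generated.png",
      "prompt": "keep the same character in a new rainy street scene",
      "model": "gpt-image-1",
      "source": "chatgpt",
      "metadata": {"provider": "openai", "repro_level": "partial"}
    }
  }
}
EOF

# curl -sS -X POST https://genasset.xyz/api/mcp \
#   -H 'Content-Type: application/json' \
#   --data @/tmp/save_image.json
```

For image_data_uri or image_base64, replace the image_url field with the corresponding field and inline data.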

What data is saved

GenAsset stays ComfyUI-independent by keeping common fields for all sources and optional provider-specific metadata.

  • Always: image preview, asset identity, version, source, timestamp.
  • Usually: prompt, model, seed if available, intent, tags.
  • Optional: provider metadata, workflow_json, annotations, runtime notes.
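Under those field categories, a saved version record might look roughly like this. This is illustrative only; the actual schema is defined by GenAsset, and the concrete values here are invented for the example.

```json
{
  "asset_name": "spring-character",
  "version": 3,
  "source": "chatgpt",
  "timestamp": "2025-01-15T10:30:00Z",
  "prompt": "keep the same character in a new rainy street scene",
  "model": "gpt-image-1",
  "seed": null,
  "intent": "scene variation",
  "tags": ["character"],
  "metadata": {"provider": "openai", "repro_level": "partial"}
}
```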

Reproducibility level
ComfyUI runs often save full recipe fields. Closed image providers may not expose seed/sampler data. Mark these as partial replay in metadata when needed.

Model categories

Use simple categories so teams know what replay quality to expect.

Open-code

Model code and weights are openly inspectable. Usually highest reproducibility when pipeline settings are saved.

Open-weight

Weights are available, but hosting/runtime may vary. Good replay if key parameters are captured.

Closed-model

Provider-managed internals. GPT image generation belongs here. Save prompt + provider metadata and mark replay as partial.

OpenClaw readiness

OpenClaw readiness comes from the MCP contract itself: stable tool names, explicit schemas, and source-agnostic metadata. The same tools can be consumed by ChatGPT, Codex, Claude, Gemini, or OpenClaw-style clients.