MCP for ChatGPT and Codex
Connect ChatGPT or Codex to GenAsset over MCP so generated images can be saved with reusable metadata.
What it does
The MCP endpoint lets ChatGPT and Codex call GenAsset tools directly. This is the shortest path to your target flow: ask the model to generate an image, then call a tool to save it with prompt and metadata.
Endpoint: https://genasset.xyz/api/mcp
Tools: workspace_status, list_assets, save_image_to_genasset, load_version_recipe
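The tool list can be sanity-checked directly over HTTP. This is a minimal sketch, assuming the endpoint speaks MCP's standard JSON-RPC over HTTP; the live call is commented out so the offline check runs anywhere:

```shell
# Standard MCP JSON-RPC request body for listing tools.
REQ='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

# Uncomment to hit the live endpoint (requires a valid workspace token):
# curl -s https://genasset.xyz/api/mcp \
#   -H "Authorization: Bearer $GENASSET_WORKSPACE_TOKEN" \
#   -H "Content-Type: application/json" \
#   -H "Accept: application/json, text/event-stream" \
#   -d "$REQ"

# Offline: confirm the request body is well-formed JSON.
echo "$REQ" | python3 -c 'import json,sys; print(json.load(sys.stdin)["method"])'
```

A successful live call returns a JSON-RPC result whose tools array should include the four tools listed above.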
Connect ChatGPT
In ChatGPT Developer Mode, add a custom MCP app and point it to the endpoint above. If your ChatGPT MCP client does not support bearer env vars yet, pass workspace_token in tool inputs.
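As a sketch, the fallback is just an extra field in the tool's arguments; the field name comes from this page, and the value is your own token:

```shell
# Fallback tool input: pass the token directly in the arguments.
ARGS='{"workspace_token": "ga_your_workspace_token"}'
echo "$ARGS"
```

Every GenAsset tool accepts this field when no bearer token is configured on the client side.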
Call workspace_status with your token and confirm your workspace name appears.
Connect Codex
From zero, use this exact setup:
codex --version
cat > ~/.genasset-mcp.env <<'EOF'
export GENASSET_WORKSPACE_TOKEN="ga_your_workspace_token"
EOF
source ~/.genasset-mcp.env
codex mcp add genasset \
--url https://genasset.xyz/api/mcp \
--bearer-token-env-var GENASSET_WORKSPACE_TOKEN
codex mcp list
If you use the GUI app, set the token in launchd so the app process can read it:
launchctl setenv GENASSET_WORKSPACE_TOKEN "ga_your_workspace_token"
To remove it later:
launchctl unsetenv GENASSET_WORKSPACE_TOKEN
Call workspace_status with {}.
If the bearer env var is configured, the token is picked up automatically.
Fallback: pass {"workspace_token":"ga_your_workspace_token"}.
Generate then save
This is the exact scenario: generate an image, then call save_image_to_genasset with asset name, token, and metadata.
Save this generated image to GenAsset.
asset_name: spring-character
prompt: keep the same character in a new rainy street scene
model: gpt-image-1
source: chatgpt
metadata: {"provider":"openai","repro_level":"partial"}
save_image_to_genasset accepts one image source: image_url, image_data_uri, or image_base64.
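A sketch of preparing the image_base64 variant from a file on disk. The file path and stand-in bytes are illustrative; the tool and field names come from this page:

```shell
# Illustrative only: stand-in bytes instead of a real generated image.
printf 'fakepng' > /tmp/generated.png

# save_image_to_genasset takes one image source; here, image_base64.
B64=$(base64 < /tmp/generated.png | tr -d '\n')

# Assemble the tool arguments (metadata mirrors the example above).
python3 - "$B64" <<'EOF'
import json, sys
args = {
    "asset_name": "spring-character",
    "prompt": "keep the same character in a new rainy street scene",
    "model": "gpt-image-1",
    "source": "chatgpt",
    "metadata": {"provider": "openai", "repro_level": "partial"},
    "image_base64": sys.argv[1],
}
print(json.dumps(args))
EOF
```

In practice ChatGPT or Codex builds these arguments for you; the sketch just shows the shape the tool receives.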
What data is saved
GenAsset stays ComfyUI-independent: a common set of fields is saved for every source, with optional provider-specific metadata on top.
- Always: image preview, asset identity, version, source, timestamp.
- Usually: prompt, model, seed if available, intent, tags.
- Optional: provider metadata, workflow_json, annotations, runtime notes.
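Put together, one saved version might look like the record below. This is an illustrative shape assembled from the fields above, not GenAsset's actual storage schema:

```shell
# Illustrative saved-version record (field names from the list above).
RECORD=$(cat <<'EOF'
{
  "asset_name": "spring-character",
  "version": 2,
  "source": "chatgpt",
  "timestamp": "2025-01-01T12:00:00Z",
  "prompt": "keep the same character in a new rainy street scene",
  "model": "gpt-image-1",
  "metadata": {"provider": "openai", "repro_level": "partial"}
}
EOF
)
echo "$RECORD"
```

The optional fields (workflow_json, annotations, runtime notes) would appear only when the source provides them.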
Model categories
Use simple categories so teams know what replay quality to expect.
Open-code
Model code and weights are openly inspectable. Usually highest reproducibility when pipeline settings are saved.
Open-weight
Weights are available, but hosting/runtime may vary. Good replay if key parameters are captured.
Closed-model
Provider-managed internals. GPT image generation belongs here. Save prompt + provider metadata and mark replay as partial.
OpenClaw readiness
OpenClaw readiness comes from the MCP contract itself: stable tool names, explicit schemas, and source-agnostic metadata. The same tools can be consumed by ChatGPT, Codex, Claude, Gemini, or OpenClaw-style clients.