Available APIs
The LLM uses these endpoints to fetch templates, node schemas, and model lists. The base URL is configured by the UI/runtime; append /api/v1 to it.
/api/v1/workflows/templates
Returns all template summaries: name, description, node count, edge count, tags, and visibility. The LLM uses this to recommend templates, pick one by user intent, and explain a template's purpose. To get the full nodes and edges, call GET /api/v1/workflows/templates/{id}, which returns definition.nodes and definition.edges.
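As an illustration, picking a template by user intent can be sketched as a keyword match over these summary fields; the scoring heuristic below is hypothetical, not part of the API:

```python
# Score template summaries against a user intent by keyword overlap.
# The summary fields (name, description, tags) follow the
# /api/v1/workflows/templates response; the scoring is a made-up heuristic.

def pick_template(summaries, intent):
    """Return the summary whose name/description/tags best match the intent."""
    words = set(intent.lower().split())

    def score(s):
        text = " ".join(
            [s.get("name", ""), s.get("description", ""), *s.get("tags", [])]
        ).lower()
        return sum(1 for w in words if w in text)

    best = max(summaries, key=score)
    return best if score(best) > 0 else None

summaries = [
    {"name": "PDF Summarizer", "description": "Summarize uploaded documents", "tags": ["llm"]},
    {"name": "Webhook Relay", "description": "Forward events", "tags": ["http"]},
]
print(pick_template(summaries, "summarize a pdf report")["name"])
```

After a candidate is found this way, fetch its full definition from /api/v1/workflows/templates/{id} before modifying it.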
/api/v1/workflows/templates/{id}
Returns a full template definition (including nodes and edges). Prefer modifying a template over building from scratch.
/api/v1/nodes/types and /api/v1/nodes/schema/{nodeType}
Returns node types, categories, descriptions, input JSON Schema, output JSON Schema, and i18n keys. The LLM uses these to understand which nodes are available, how to configure them, what input/output ports they expose, which nodes can connect to which, and to generate node config JSON. POST /api/v1/nodes/validate can validate a single node config, but it is a protected write route.
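As a sketch of how these schemas support connection checks, assuming each node's port names are the property keys of its input/output schemas (the helper below is illustrative, not part of the API):

```python
# Sketch of the wiring checks the node schemas enable: port existence,
# orphan nodes, and cycles. Assumes ports are the JSON Schema property
# keys returned by /api/v1/nodes/schema/{nodeType}.

def check_wiring(nodes, edges):
    """nodes: {nodeId: {"inputs": [...], "outputs": [...]}}; edges: list of dicts."""
    issues = []
    connected = set()
    for e in edges:
        src, tgt = e["sourceNode"], e["targetNode"]
        connected |= {src, tgt}
        if e["sourcePort"] not in nodes[src]["outputs"]:
            issues.append(f"unknown sourcePort {e['sourcePort']} on {src}")
        if e["targetPort"] not in nodes[tgt]["inputs"]:
            issues.append(f"unknown targetPort {e['targetPort']} on {tgt}")
    # A single-node workflow has no edges, so only flag orphans in larger graphs.
    issues += [f"orphan node {n}" for n in nodes if n not in connected and len(nodes) > 1]
    # Cycle detection via DFS with a visiting/done state per node.
    adj = {}
    for e in edges:
        adj.setdefault(e["sourceNode"], []).append(e["targetNode"])
    state = {}  # node -> 1 (visiting) or 2 (done)

    def visit(n):
        state[n] = 1
        for m in adj.get(n, []):
            if state.get(m) == 1 or (m not in state and visit(m)):
                return True
        state[n] = 2
        return False

    if any(visit(n) for n in nodes if n not in state):
        issues.append("cycle detected")
    return issues
```

An empty issue list means the graph passes the structural checks; data-type compatibility still needs a look at the schemas themselves.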
/api/v1/llm/models
Returns models grouped by provider, including chat and embedding types. Used to select models for ai/llm, ai/llm_config, and embedding/RAG nodes, or to fill in model names when generating JSON. Note: this endpoint currently does not read user settings, so enabled defaults to false; complete user-level provider/credential info is in the protected GET /api/v1/llm/providers.
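A hypothetical sketch of filtering this response by model type; the exact provider-grouped shape shown here is an assumption, not the documented DTO:

```python
# Flatten provider groups and keep chat-type models. The response shape
# (provider name plus a models list with name/type) is assumed, not exact.

def chat_models(providers):
    return [
        (p["provider"], m["name"])
        for p in providers
        for m in p["models"]
        if m.get("type") == "chat"
    ]

sample = [
    {"provider": "openai", "models": [
        {"name": "gpt-x", "type": "chat"},
        {"name": "embed-s", "type": "embedding"},
    ]},
]
```

The same filter with `"embedding"` selects models for embedding/RAG nodes.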
What LLM Can Do With These APIs
Five core capabilities + clear boundaries.
1. Wiring analysis: fetch a workflow's nodes + edges, then use /nodes/schema/{nodeType} to get each node's input/output schema. Check: does the edge's sourcePort exist in the source node's outputs, does the targetPort exist in the target node's inputs, are data types roughly compatible, and are there orphan nodes, nodes with no entry, cycle risks, or missing critical configs?
2. Workflow generation from natural language: call /nodes/types to find file upload, document-to-text, LLM, and output nodes; call /llm/models to select available models; generate CreateWorkflowRequest.definition (structure matches workflow_dto.rs line 189) with nodes + edges; return importable JSON.
3. Template-based generation: list templates, read a full definition, modify it, and output importable JSON.
4. LLM node updates: GET /api/v1/workflows/{id}/llm-nodes to find LLM nodes, then PUT /api/v1/workflows/{id}/llm-nodes/{node_id} to update model/provider/config (code in workflow/mod.rs line 1510). Note: these are protected write routes requiring authentication.
5. Workflow explanation: using description, category, and input/output schemas, the LLM can translate complex workflows into human language: what each step does, how data flows, where credentials are needed, and where failures may occur.
Boundary Description
All APIs listed above are read-only discovery endpoints. The LLM can analyze, recommend, and generate draft JSON, but it cannot save anything by itself.
To persist creation requires POST /api/v1/workflows; to update existing workflow requires PUT/PATCH /api/v1/workflows/{id}; to validate node config use POST /api/v1/nodes/validate. These protected routes require authentication.
Conversation Flow
The LLM progresses through this flow when interacting with users, asking only questions that affect the structure.
1. Discover: call /nodes/types; when the goal sounds like a common pattern, call /workflows/templates; when LLM/RAG/summarization/classification/extraction is involved, call /llm/models.
2. Build: set each node's nodeType from its typeId; configure it from its inputSchema; wire using source outputSchema.properties keys to target inputSchema.properties keys; generate UUID v4 for all IDs; use node names in the user's language.
3. Output: a single json code block, with no comments inside.
JSON Structure and Rules
Generate export/import-style workflow JSON. nodeCount / edgeCount must match the actual array lengths.
```json
{
  "workflowId": "uuid-v4",
  "name": "Workflow name",
  "description": "What this workflow does",
  "state": "draft",
  "nodeCount": 0,
  "edgeCount": 0,
  "createdAt": "2026-04-26T00:00:00Z",
  "updatedAt": "2026-04-26T00:00:00Z",
  "version": 1,
  "isFavorite": false,
  "isTemplate": false,
  "isDraft": true,
  "tags": [],
  "visibility": "private",
  "definition": {
    "nodes": [],
    "edges": []
  }
}
```
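The count and ID rules above can be checked mechanically; a minimal sketch covering only rules stated in this document (counts match the arrays, the workflow ID is UUID v4):

```python
import uuid

# Minimal consistency check for the export/import workflow structure:
# nodeCount/edgeCount must equal the definition array lengths, and the
# workflowId must be a UUID v4.

def validate_workflow(wf):
    issues = []
    if wf["nodeCount"] != len(wf["definition"]["nodes"]):
        issues.append("nodeCount does not match definition.nodes")
    if wf["edgeCount"] != len(wf["definition"]["edges"]):
        issues.append("edgeCount does not match definition.edges")
    try:
        if uuid.UUID(wf["workflowId"]).version != 4:
            issues.append("workflowId is not UUID v4")
    except ValueError:
        issues.append("workflowId is not UUID v4")
    return issues
```

Running this before returning JSON to the user catches the most common count/ID mistakes early.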
Node Format
nodeType uses the typeId returned by the API. Position nodes roughly 320px apart left to right to keep the layout readable.
```json
{
  "nodeId": "uuid-v4",
  "nodeType": "ai/llm",
  "name": "LLM",
  "position": { "x": 0, "y": 0 },
  "config": {},
  "inputs": {},
  "outputs": {}
}
```
Edge Format
Port names come from the keys in schema properties.
```json
{
  "edgeId": "uuid-v4",
  "sourceNode": "source-node-uuid",
  "sourcePort": "output",
  "targetNode": "target-node-uuid",
  "targetPort": "input"
}
```
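Assembling nodes and edges in these formats can be sketched as follows; the helper names and the file-upload typeId are hypothetical, while the field names and the 320px spacing come from this document:

```python
import uuid

# Build node and edge dicts in the documented formats. Nodes are spaced
# 320px apart left to right; all IDs are fresh UUID v4 values.

def make_node(node_type, name, index):
    return {
        "nodeId": str(uuid.uuid4()),
        "nodeType": node_type,
        "name": name,
        "position": {"x": index * 320, "y": 0},
        "config": {},
        "inputs": {},
        "outputs": {},
    }

def make_edge(source, source_port, target, target_port):
    return {
        "edgeId": str(uuid.uuid4()),
        "sourceNode": source["nodeId"],
        "sourcePort": source_port,
        "targetNode": target["nodeId"],
        "targetPort": target_port,
    }

# "input/file_upload" is a placeholder typeId; real values come from /nodes/types.
upload = make_node("input/file_upload", "Upload", 0)
llm = make_node("ai/llm", "LLM", 1)
edge = make_edge(upload, "output", llm, "input")
```

The resulting dicts drop straight into definition.nodes and definition.edges of the workflow JSON.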
Wiring Rules
Use schema properties as port names for connections.
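Deriving port names from a node's schemas is a one-liner; a minimal sketch:

```python
# Port names are the property keys of a node's input/output JSON Schema.

def port_names(schema):
    return list(schema.get("properties", {}).keys())

output_schema = {"type": "object", "properties": {"text": {"type": "string"}}}
input_schema = {"type": "object", "properties": {"prompt": {"type": "string"}}}
```

An edge is valid only if its sourcePort appears in the source node's output port names and its targetPort in the target node's input port names.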
Source ports come from outputSchema.properties; target ports come from inputSchema.properties.
LLM Model Selection
Select model types by scenario when calling /llm/models.
Write the selected model into the node's config, or tell the user to bind one after import.
Template Strategy
Modifying templates is more efficient than building from scratch.
Check /workflows/templates before building from scratch.
Response Style
Be concise and practical during design, provide a summary before final output.
"I will generate this workflow: Input -> Processing -> Output. You will need to fill in after import: API Key / Knowledge Base ID / Webhook URL."
The final output contains: 1) a short summary as above; 2) a json code block containing the complete importable JSON; 3) a list of fields the user may need to fill in after import (if any).
Quality Checklist
Verify each item before returning JSON.
- nodeCount / edgeCount match the actual array lengths.
- All IDs are UUID v4.
- Every sourcePort / targetPort exists in the corresponding schema properties.
- Model names come from /llm/models, or are clearly marked as placeholders.
Simplified Instructions for LLM to Copy
The button copies the full skill.md; here is a quick preview.
Base URL: https://chengos.mysisshu.link
Read-only discovery APIs:
GET /api/v1/workflows/templates
GET /api/v1/workflows/templates/{id}
GET /api/v1/nodes/types
GET /api/v1/nodes/schema/{nodeType}
GET /api/v1/llm/models
Use these APIs to:
1) Analyze node wiring (check ports, types, orphans, cycles)
2) Create workflow JSON from natural language
3) Select LLM/embedding models from /llm/models
4) Generate from templates (list, read, modify, output)
5) Explain workflows and node capabilities in plain language
Generate one importable workflow JSON object for the user.
Note: saving to DB requires POST/PUT authenticated routes.