Commands & Skills
Commands are reusable prompt templates that you invoke in AI chat using the / slash syntax. They let your team codify repeatable workflows like bug triage, spec writing, standup summaries, and more — so anyone can get consistent, high-quality AI assistance without crafting the perfect prompt each time.
How Commands Work
- Type / in the AI chat input to open the command palette
- Select a command from the dropdown (or keep typing to filter)
- Add any additional context to your message
- Press Enter to send
The command's prompt is injected as system context for the AI, guiding its response. You can combine commands with regular text, file attachments, and references to people or records.
Referencing People and Records
Commands support three types of inline references that provide the AI with additional context:
| Syntax | What It Does | Example |
|---|---|---|
| @member | References a team member — the AI receives their name, role, and organization | @alice |
| #DISPLAY-ID | References a record — the AI receives the full record content | #TSK-001 |
| /command | Chains another command — its prompt is included alongside the current one | /triage-issue |
These references work in both your chat messages and inside command prompt bodies. When the AI processes your message, all references are resolved into rich context automatically.
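As a rough illustration, the resolution step might look like the following sketch. The token patterns, lookup tables, and function names here are invented for illustration and are not the product's actual implementation.

```typescript
// Hypothetical sketch of inline-reference resolution. The real token
// grammar and lookups are internal; all names here are invented.
type Context = { kind: "member" | "record" | "command"; key: string; detail: string };

// Invented lookup tables standing in for workspace data.
const members: Record<string, string> = { alice: "Alice, Engineer, Acme Org" };
const records: Record<string, string> = { "TSK-001": "Task: fix login bug (full record content)" };
const commands: Record<string, string> = { "triage-issue": "You are a triage assistant." };

// Find @member, #DISPLAY-ID, and /command tokens and resolve each to context.
function resolveReferences(message: string): Context[] {
  const out: Context[] = [];
  for (const m of message.matchAll(/@([\w-]+)|#([\w-]+)|\/([\w-]+)/g)) {
    if (m[1] && members[m[1]]) out.push({ kind: "member", key: m[1], detail: members[m[1]] });
    else if (m[2] && records[m[2]]) out.push({ kind: "record", key: m[2], detail: records[m[2]] });
    else if (m[3] && commands[m[3]]) out.push({ kind: "command", key: m[3], detail: commands[m[3]] });
    // Unresolved tokens are simply left as plain text, as noted in the FAQ.
  }
  return out;
}
```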
Built-in Commands
Every workspace comes with a set of built-in command templates:
| Command | What It Does |
|---|---|
/add-bug | Creates a structured bug report from your description |
/feature-request | Drafts a feature request with user stories and acceptance criteria |
/summarize-week | Generates a weekly progress summary across all project activity |
/triage-issue | Analyzes and triages an issue with priority and assignment recommendations |
/write-spec | Generates a technical specification from requirements |
/standup | Creates a daily standup update based on your recent activity |
Built-in commands cannot be edited directly. To customize one, click Duplicate to Customize to create your own version.
Creating Custom Commands
From the Operator App
Space administrators can create commands available to all workspace members:
- Open the Operator app from the sidebar
- Click the ✨ Commands button in the top toolbar to open the commands drawer
- Click + and select New Command
- Fill in the command details:
- Name: A descriptive name (e.g., "Sprint Retrospective")
- Slug: Auto-generated from the name (e.g., /sprint-retrospective) — this is what users type in chat
- Description: A short summary shown in the command palette (max 120 characters)
- Instruction: A plain-English description of what you want the AI to do — the primary authoring surface
- Prompt Body: The engineered prompt sent to the AI. Leave blank to use the Instruction as-is, or use Build Prompt to generate one automatically.
- Set the visibility (Space, Organization, Team, or Private)
- Click Create Command
Instruction vs Prompt Body
| Field | Purpose |
|---|---|
| Instruction | Plain-English description of what the command should do. Written by you. Used as fallback when Prompt Body is blank. |
| Prompt Body | The full engineered prompt sent to the AI — with workspace references, structured instructions, and tone guidance. Can be hand-written or generated using Build Prompt. |
If Prompt Body is left blank, the Instruction is used directly. This means a command is immediately usable with just an Instruction — no prompt engineering required.
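The fallback can be sketched in a couple of lines (a minimal illustration, not the actual implementation):

```typescript
// Sketch of the fallback described above: if no Prompt Body is set,
// the plain-English Instruction is sent as the prompt directly.
function effectivePrompt(instruction: string, promptBody?: string): string {
  return promptBody && promptBody.trim() !== "" ? promptBody : instruction;
}
```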
Writing Effective Prompts
The prompt body editor supports rich formatting:
- Text formatting: Bold, italic, underline, strikethrough
- Structure: Headings (H1-H3), bullet lists, numbered lists, blockquotes
- References: @member mentions, #record references, /command chains
Tips for writing good command prompts:
- Be specific about the role: Start with "You are a [role]" to set context
- Define the output format: Describe what the response should look like
- Reference workspace resources: Use #id or /command to point the AI at relevant records, schemas, and playbooks — it will look them up at runtime rather than relying on static text
- Keep it focused: One command should do one thing well
- Use chaining: Reference other commands with / to compose complex workflows
Command Chaining
Commands can reference other commands using the / syntax in their prompt body. When a command is invoked, any referenced sub-commands are resolved recursively (up to 3 levels deep) and their prompts are included in the AI context.
For example, a /weekly-review command could reference /summarize-week and /triage-issue to combine their capabilities.
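A minimal sketch of that recursive resolution, assuming an invented in-memory prompt store and the depth cap described above, might look like:

```typescript
// Hypothetical sketch of recursive sub-command resolution with a
// 3-level depth cap and cycle protection. The prompt store is invented.
const prompts: Record<string, string> = {
  "weekly-review": "Combine /summarize-week and /triage-issue into one report.",
  "summarize-week": "Summarize the week's activity.",
  "triage-issue": "Triage the issue. See /escalate for severe cases.",
  escalate: "Escalate to the on-call owner.",
};

function expand(slug: string, depth = 0, seen = new Set<string>()): string[] {
  const body = prompts[slug];
  if (!body || depth > 3 || seen.has(slug)) return []; // cap depth, skip cycles
  seen.add(slug);
  const collected = [body];
  for (const m of body.matchAll(/\/([\w-]+)/g)) {
    collected.push(...expand(m[1], depth + 1, seen)); // resolve sub-commands recursively
  }
  return collected;
}
```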
Advanced: Build Prompt Wizard
The Build Prompt button in the command editor runs an AI agent that explores your workspace and constructs a high-quality prompt automatically. The agent reads your databases, schemas, documents, and existing commands — and produces a prompt that references your specific workspace resources by ID rather than copying their content.
How It Works
- Write an Instruction and add Evaluation Criteria (see below)
- Click Build Prompt in the command editor header
- A readiness check scores your setup — the more criteria and sample inputs you provide, the better the output
- Click Build Prompt to start — the wizard runs 8 steps in sequence:
- Prepares your criteria and sample inputs (in parallel)
- Builds an initial workspace-anchored prompt
- Runs each sample through the prompt and scores the results
- Produces three refined candidates (incremental patch, structural redesign, clean-slate rewrite)
- Synthesizes feedback across all three into a prioritised recommendation list
- Review the three candidates — the highest-scoring one is pre-selected
- Read the synthesized feedback to understand what the AI recommends improving
- Click Apply selected prompt to write it to the Prompt Body field
Temperature Mode
Below the Instruction field, a Temperature dropdown controls how the AI balances creativity vs. groundedness:
| Mode | Behaviour |
|---|---|
| Whimsical | Fully creative; draws freely on imagination and general knowledge |
| Creative | Free to make inferences and suggest ideas beyond the source material |
| Balanced (default) | Balances workspace material with general knowledge; notes gaps |
| Grounded | Prefers workspace material; flags areas of uncertainty |
| Strict | Only states what can be directly confirmed from workspace resources |
The temperature setting applies to all run, score, and refine steps in the wizard — and is embedded in the generated prompt so the AI uses the same posture at execution time.
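One way to picture that embedding (the posture wording below is invented, not the product's exact text):

```typescript
// Sketch of appending the temperature posture to a generated prompt so
// execution uses the same stance as the wizard. Wording is invented.
const postures: Record<string, string> = {
  whimsical: "Draw freely on imagination and general knowledge.",
  creative: "Make inferences and suggest ideas beyond the source material.",
  balanced: "Balance workspace material with general knowledge; note gaps.",
  grounded: "Prefer workspace material; flag areas of uncertainty.",
  strict: "State only what workspace resources directly confirm.",
};

function withPosture(prompt: string, mode: keyof typeof postures): string {
  return `${prompt}\n\nPosture: ${postures[mode]}`;
}
```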
Advanced: Evaluation Criteria
Evaluation Criteria are binary (pass/fail) assertions used to judge the quality of prompt output — e.g. "Does the response reference the contract register?" or "Is the response under 250 words?".
They are used by both the Build Prompt Wizard (to score candidates during refinement) and the Autoresearch Loop (to score prompts across many iterations).
Adding Criteria
In the command editor, scroll to the Evaluation Criteria section:
- Click Add to type a criterion manually
- Click Auto-generate to let an AI agent propose 5–8 criteria based on your Instruction and workspace context
Good criteria:
- Are answerable YES or NO by reading the AI's response
- Assert what the response contains or does (not what the prompt says)
- Reference specific workspace materials the AI should consult
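Conceptually, scoring is a count of passed binary assertions. In the product an AI judges each criterion against the response; in this sketch, simple predicate functions stand in for that judge:

```typescript
// Sketch of binary pass/fail scoring. Predicates here stand in for the
// AI judge the product actually uses; both example criteria are invented.
type Criterion = { text: string; check: (response: string) => boolean };

const criteria: Criterion[] = [
  { text: "References the contract register", check: (r) => r.includes("contract register") },
  { text: "Under 250 words", check: (r) => r.split(/\s+/).length < 250 },
];

function score(response: string, crits: Criterion[]): { passes: number; total: number } {
  const passes = crits.filter((c) => c.check(response)).length;
  return { passes, total: crits.length };
}
```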
Sample Inputs
Sample inputs are realistic examples of the input you'd give the command in real use. The Build Prompt Wizard and Autoresearch Loop use them to run the prompt against real-world scenarios and score the output.
To add a sample:
- Upload a file (PDF, Word, TXT, or Markdown) — the text is extracted automatically
- Or type the sample text directly
- Optionally add sample-specific criteria that only apply to that input
The more samples you provide (3+ is recommended), the more accurately the wizard can evaluate prompt quality.
Advanced: Autoresearch Loop
The Autoresearch Loop runs the prompt improvement process repeatedly in the background — useful when you want to explore many candidates without waiting. It uses the same scoring logic as the Build Prompt Wizard but runs autonomously for N iterations.
Starting a Loop
- Open a command and navigate to the Autoresearch panel (button in the Evaluation section)
- Optionally upload a sample input file to run real output evaluations
- Set the number of iterations (default: 3)
- Click Start Loop
The loop runs as a background job — you can close the browser and return later. Each iteration generates a new candidate prompt and scores it; the best result is tracked throughout.
Reading the Results
Results appear as a stepped line chart:
- Y axis: cumulative pass count against your criteria
- X axis: iteration number
- Hover a dot: see the candidate prompt, per-criterion pass/fail breakdown, and an Accept button
Click Accept on any candidate to apply it to the command's Prompt Body. The "Best Prompt" section below the chart shows the overall winner.
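The loop's control flow can be sketched as follows; `generateCandidate` and `scorePrompt` are stubs standing in for the AI-driven rewriting and criterion-scoring steps:

```typescript
// Sketch of the autoresearch control loop: generate a candidate each
// iteration, score it, and keep the best result throughout.
type Scored = { prompt: string; passes: number };

function autoresearch(
  iterations: number,
  generateCandidate: (i: number) => string, // stub for AI prompt rewriting
  scorePrompt: (prompt: string) => number,  // stub for criterion scoring
): { history: Scored[]; best: Scored } {
  const history: Scored[] = [];
  let best: Scored = { prompt: "", passes: -1 };
  for (let i = 0; i < iterations; i++) {
    const prompt = generateCandidate(i);
    const scored = { prompt, passes: scorePrompt(prompt) };
    history.push(scored);                          // one dot per iteration on the chart
    if (scored.passes > best.passes) best = scored; // best result tracked throughout
  }
  return { history, best };
}
```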
Managing Commands
Editing
Select any command from the list to edit its name, description, or prompt body. Changes take effect immediately for all users who can see the command.
Editing the slug will update references in other commands you have access to, but may break references in commands you cannot see.
Archiving and Deleting
- Archive: Soft-deletes the command. It will no longer appear in the command palette.
- Delete: Permanently removes the command and cannot be undone.
Access Control
Commands support granular access control:
- Space: Visible to all workspace members
- Organization: Visible to members of your organization
- Team: Visible to members of your team
- Private: Visible only to you
After creating a command, you can fine-tune access using the sharing controls.
Frequently Asked Questions
Can I use commands without the AI chat? Not directly — commands are currently invoked through the AI chat interface. External tools can also access commands via the MCP integration (Enterprise plan).
What happens if I reference a command that does not exist?
The /slug text remains in your message but is treated as plain text. The AI will not receive any special context for unresolved commands.
Is there a limit to how many commands I can create? There is no hard limit on the number of commands. Commands are available on plans that include AI capabilities.
Do I need to set up Evaluation Criteria before using Build Prompt? No — the wizard will generate fallback criteria if none are defined. However, the output is significantly better when you provide at least 3 criteria and 3 sample inputs.
What is the difference between Build Prompt and Autoresearch? Build Prompt runs a single end-to-end wizard that produces 3 polished candidates with structured feedback — best for one-shot prompt engineering. Autoresearch runs many lightweight iterations in the background — best when you want to explore a wider space of candidates over time.
Can I see what context the AI receives? The resolved system prompt is logged in the Convex function logs for debugging. Contact your workspace administrator for access.