How It Works
The assistant connects to your active simulation through the same bridge protocol that powers the rest of the IDE. It has access to specialized tools that let it observe and interact with the simulation environment directly. The tools fall into three categories: Observation, Control, and Vision. The observation tools are:
- Scene Graph — query the full entity hierarchy, including models, links, joints, and sensors
- Telemetry Sampling — read recent values from any data channel with time-series history
- Readiness Details — check which subsystems are online and which are still initializing
- Topic Browser — discover all available data channels and their update rates
- TF Tree — inspect the transform hierarchy between coordinate frames
- Simulation State — combined readiness, telemetry, and background task status in one call
For the full tool inventory with parameters and examples, see the AI Tools Reference.
Vision and Recording
The assistant can capture screenshots and record video from the 3D viewer, then analyze them using Gemini vision.

Screenshots are useful for quick visual checks — whether a model loaded correctly, an entity is in the right position, or a collision geometry looks correct. The assistant receives the image directly and can describe what it sees.

Video recording captures the simulation window for up to 30 seconds. The assistant can then analyze the recording to observe behavior over time — stability during maneuvers, oscillations, drift, or unexpected collisions. Small recordings are analyzed frame by frame; larger files are uploaded to Gemini’s File API.

You can also paste or drag images directly into the chat input. The assistant will analyze them alongside any active simulation context.
Model Tiers
Standard
Claude Sonnet. Best for routine questions, parameter lookups, and straightforward debugging. Fast response times.
Max
Claude Opus. Use for complex multi-step reasoning, architecture decisions, and deep analysis of simulation behavior.
Output Styles
Control how verbose the assistant is with the /style command.
Concise
Code-first, bullet points, max 3 sentences per explanation. Best when you know what you want.
Explanatory
Discusses design tradeoffs, references industry patterns, and wraps key points in insight blocks. Default style.
Terse
No prose. One-line confirmations: “Done.”, “Fixed.”, “Created X.” Only speaks when asked a question.
Skills
Skills are reusable prompt workflows invoked with a slash command. They inject structured instructions so the assistant follows a specific process instead of improvising.
| Skill | What It Does |
|---|---|
| /plan-fix | Structured debugging: reproduce, isolate, fix, verify |
| /explain-code | Walk through purpose, patterns, edge cases, and dependencies |
| /review-code | Correctness, security, performance, and style checklist |
Snapshot and Rollback
Every file the assistant edits is automatically checkpointed. Each assistant message that modifies files shows a Revert button that restores all affected files to their exact state before that message — including handling newly created and deleted files. Snapshots are stored locally in your workspace using zstd compression. No configuration needed.

Revert operates on the file system directly and does not create git commits. It restores byte-for-byte content regardless of git state.
MCP Servers
The assistant can connect to external services through the Model Context Protocol (MCP). MCP servers expose tools the assistant can call alongside its built-in capabilities.
Built-in servers
Two servers are available out of the box. They activate automatically when credentials are configured in workspace settings:
| Server | Endpoint | Authentication |
|---|---|---|
| Linear | mcp.linear.app/mcp | OAuth (connect via settings) or LINEAR_API_KEY |
| GitHub | api.githubcopilot.com/mcp/ | GITHUB_PERSONAL_ACCESS_TOKEN |
Adding custom servers
Create `.substrate/mcp.json` at your workspace root:
stdio transport
Launches a local process. Set `command` and optionally `args` and `env`.
http transport
Connects to a remote HTTP endpoint.
Values prefixed with `env:` are resolved from workspace secrets — for example, `"env:MY_API_KEY"` reads the secret named MY_API_KEY rather than using the literal string.
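As a sketch, a `.substrate/mcp.json` defining one server of each transport type might look like the following. The server names, the `url` field, and the top-level `mcpServers` key are illustrative assumptions; only `command`, `args`, `env`, and the `env:` secret prefix are documented above.

```json
{
  "mcpServers": {
    "local-tools": {
      "command": "npx",
      "args": ["-y", "some-mcp-server"],
      "env": { "MY_API_KEY": "env:MY_API_KEY" }
    },
    "remote-tools": {
      "url": "https://example.com/mcp"
    }
  }
}
```

Here `"env:MY_API_KEY"` pulls the value from workspace secrets at launch rather than embedding the key in the file.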
Lifecycle Hooks
Hooks attach custom actions to agent lifecycle events. When a matching event fires, the hook runs its handler.
Events
Events fall into four categories: Session, Tool Use, File Operations, and Shell & Git. The Session events are SessionStart and SessionEnd.
Handler types
| Type | What it does | Config |
|---|---|---|
| command | Runs a shell command and captures output | {"command": "cargo check"} |
| prompt | Injects text into the agent’s context | {"prompt": "Always run tests after edits"} |
| notify | Sends a notification to the user | {"message": "Edit detected"} |
Matchers
Hooks can optionally filter by tool name and file path. Hooks with higher priority values fire first. Hooks are managed through workspace settings.
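A minimal sketch of two hook definitions, assuming hooks are declared as JSON in workspace settings. The `event`, `matcher`, and `priority` field names, and any event name other than SessionStart and SessionEnd, are hypothetical; the handler fields (`type`, `prompt`, `command`) come from the table above.

```json
{
  "hooks": [
    {
      "event": "SessionStart",
      "type": "prompt",
      "prompt": "Always run tests after edits",
      "priority": 10
    },
    {
      "event": "FileEdit",
      "matcher": { "path": "src/**" },
      "type": "command",
      "command": "cargo check"
    }
  ]
}
```

The matcher restricts the second hook to edits under src/; the explicit priority on the first makes it fire ahead of lower-priority hooks on the same event.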
Permission Rules
Workspace administrators can define rules that control which tools the assistant can use and which paths it can access.
Rule types
| Type | Effect |
|---|---|
| Allow | Permit the action |
| Deny | Block the action (always wins, regardless of scope) |
| Ask | Show a confirmation dialog before proceeding |
Scope hierarchy
Rules are evaluated with scope-based precedence. Organization rules outrank workspace rules, which outrank user rules:
| Scope | Priority Weight |
|---|---|
| Organization | +100 |
| Workspace | +50 |
| User | +10 |
Path patterns
The `pattern` field supports glob matching, including the recursive `**` wildcard.
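For illustration, a rule set using glob patterns, assuming rules are written as JSON. The exact field names (`type`, `tool`, `pattern`) and the top-level `rules` key are assumptions beyond what the tables above document.

```json
{
  "rules": [
    { "type": "deny", "pattern": "secrets/**" },
    { "type": "ask", "tool": "shell", "pattern": "**/*.sh" },
    { "type": "allow", "pattern": "src/**" }
  ]
}
```

Per the rule types above, a Deny always wins: even if the allow rule on `src/**` came from the organization scope, a matching deny would still block access.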
Context Awareness
The assistant automatically detects the running simulation, robot type, controller state, and system health. You do not need to explain your setup — ask questions directly.

The assistant reads simulation state on demand through tool calls. It does not continuously stream all telemetry. This keeps conversations focused and avoids unnecessary overhead.