The Substrate AI Assistant is not a generic code copilot. It is a simulation-aware agent that can see your 3D scene, read live telemetry, query the scene graph, and execute commands against a running simulation. Open the chat panel from the right sidebar to start a conversation.

How It Works

The assistant connects to your active simulation through the same bridge protocol that powers the rest of the IDE. It has access to specialized tools that let it observe and interact with the simulation environment directly.
  • Scene Graph — query the full entity hierarchy, including models, links, joints, and sensors
  • Telemetry Sampling — read recent values from any data channel with time-series history
  • Readiness Details — check which subsystems are online and which are still initializing
  • Topic Browser — discover all available data channels and their update rates
  • TF Tree — inspect the transform hierarchy between coordinate frames
  • Simulation State — combined readiness, telemetry, and background task status in one call
For the full tool inventory with parameters and examples, see the AI Tools Reference.

Vision and Recording

The assistant can capture screenshots and record video from the 3D viewer, then analyze them using Gemini vision.
Screenshots are useful for quick visual checks: whether a model loaded correctly, an entity is in the right position, or a collision geometry looks correct. The assistant receives the image directly and can describe what it sees.
Video recording captures the simulation window for up to 30 seconds. The assistant can then analyze the recording to observe behavior over time: stability during maneuvers, oscillations, drift, or unexpected collisions. Small recordings are analyzed frame by frame; larger files are uploaded to Gemini's File API.
Try asking: “Record the simulation for 10 seconds while I run this script, then tell me if anything looks wrong.”
You can also paste or drag images directly into the chat input. The assistant will analyze them alongside any active simulation context.

Model Tiers

Standard

Claude Sonnet. Best for routine questions, parameter lookups, and straightforward debugging. Fast response times.

Max

Claude Opus. Use for complex multi-step reasoning, architecture decisions, and deep analysis of simulation behavior.

Output Styles

Control how verbose the assistant is with the /style command.
/style concise
/style explanatory
/style terse

Concise

Code-first, bullet points, max 3 sentences per explanation. Best when you know what you want.

Explanatory

Discusses design tradeoffs, references industry patterns, and wraps key points in insight blocks. Default style.

Terse

No prose. One-line confirmations: “Done.”, “Fixed.”, “Created X.” Only speaks when asked a question.

Skills

Skills are reusable prompt workflows invoked with a slash command. They inject structured instructions so the assistant follows a specific process instead of improvising.
/plan-fix
/explain-code
/review-code
| Skill | What It Does |
| --- | --- |
| /plan-fix | Structured debugging: reproduce, isolate, fix, verify |
| /explain-code | Walk through purpose, patterns, edge cases, and dependencies |
| /review-code | Correctness, security, performance, and style checklist |
Type /skill in the chat to list all available skills with descriptions.

Snapshot and Rollback

Every file the assistant edits is automatically checkpointed. Each assistant message that modifies files shows a Revert button that restores all affected files to their exact state before that message — including handling newly created and deleted files. Snapshots are stored locally in your workspace using zstd compression. No configuration needed.
Revert operates on the file system directly and does not create git commits. It restores byte-for-byte content regardless of git state.
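The checkpoint-and-revert behavior described above can be sketched in a few lines. This is a hypothetical illustration, not Substrate's implementation: the `Checkpoint` class and its methods are invented names, and the real system additionally compresses snapshots with zstd.

```python
import os

class Checkpoint:
    """Byte-for-byte snapshot of files touched by one assistant message (hypothetical sketch)."""

    def __init__(self):
        self.saved = {}  # path -> original bytes, or None if the file did not exist yet

    def record(self, path):
        # Capture the file's exact contents before it is modified.
        # None marks a file that the edit is about to newly create.
        if path not in self.saved:
            self.saved[path] = open(path, "rb").read() if os.path.exists(path) else None

    def revert(self):
        # Restore every touched file: rewrite originals byte-for-byte,
        # delete files that did not exist before the edit.
        for path, original in self.saved.items():
            if original is None:
                if os.path.exists(path):
                    os.remove(path)
            else:
                with open(path, "wb") as f:
                    f.write(original)
```

Because the snapshot stores raw bytes per file, restoring is independent of git state, which matches the Revert behavior described above.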

MCP Servers

The assistant can connect to external services through the Model Context Protocol (MCP). MCP servers expose tools the assistant can call alongside its built-in capabilities.

Built-in servers

Two servers are available out of the box. They activate automatically when credentials are configured in workspace settings:
| Server | Endpoint | Authentication |
| --- | --- | --- |
| Linear | mcp.linear.app/mcp | OAuth (connect via settings) or LINEAR_API_KEY |
| GitHub | api.githubcopilot.com/mcp/ | GITHUB_PERSONAL_ACCESS_TOKEN |

Adding custom servers

Create .substrate/mcp.json at your workspace root:
{
  "servers": {
    "my-server": {
      "transport": "stdio",
      "command": "npx",
      "args": ["-y", "@my-org/mcp-server"],
      "env": {
        "API_KEY": "env:MY_SECRET_NAME"
      },
      "description": "My custom tool server"
    }
  }
}
stdio transport: launches a local process. Set command, and optionally args and env.
{
  "transport": "stdio",
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-filesystem"],
  "env": { "ROOT": "/path/to/dir" }
}
http transport: connects to a remote HTTP endpoint.
{
  "transport": "http",
  "url": "https://my-mcp-server.example.com/mcp"
}
Environment values prefixed with env: are resolved from workspace secrets — for example, "env:MY_API_KEY" reads the secret named MY_API_KEY rather than using the literal string.
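The env: resolution rule above can be sketched as a small function. This is an illustrative sketch only; `resolve_env_values` and the `secrets` mapping are hypothetical names, not Substrate APIs.

```python
def resolve_env_values(env_config, secrets):
    """Resolve 'env:'-prefixed values from a workspace secrets store (hypothetical sketch).

    A value like "env:MY_API_KEY" is replaced with the secret named
    MY_API_KEY; any other value is passed through as a literal string.
    """
    resolved = {}
    for key, value in env_config.items():
        if isinstance(value, str) and value.startswith("env:"):
            resolved[key] = secrets[value[len("env:"):]]
        else:
            resolved[key] = value
    return resolved
```

So the "API_KEY": "env:MY_SECRET_NAME" entry from the mcp.json example would hand the server the secret named MY_SECRET_NAME, never the literal string.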

Lifecycle Hooks

Hooks attach custom actions to agent lifecycle events. When a matching event fires, the hook runs its handler.

Events

SessionStart · SessionEnd · PostEdit

Handler types

| Type | What it does | Config |
| --- | --- | --- |
| command | Runs a shell command and captures output | {"command": "cargo check"} |
| prompt | Injects text into the agent's context | {"prompt": "Always run tests after edits"} |
| notify | Sends a notification to the user | {"message": "Edit detected"} |
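Dispatch over the three handler types might look like the following. This is a hedged sketch of the semantics, not Substrate code; `run_handler` is an invented name.

```python
import subprocess

def run_handler(handler_type, config):
    """Execute one hook handler by type (hypothetical sketch of the three handler kinds)."""
    if handler_type == "command":
        # Run the shell command and capture its output for the agent.
        result = subprocess.run(config["command"], shell=True,
                                capture_output=True, text=True)
        return result.stdout
    elif handler_type == "prompt":
        # Inject text into the agent's context.
        return f"[context] {config['prompt']}"
    elif handler_type == "notify":
        # Surface a notification to the user.
        return f"[notify] {config['message']}"
    raise ValueError(f"unknown handler type: {handler_type}")
```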

Matchers

Hooks can optionally filter by tool name and file path:
{
  "event": "PostEdit",
  "handler_type": "command",
  "handler_config": { "command": "cargo test" },
  "matcher": {
    "tool": "Edit",
    "glob": "**/*.rs"
  },
  "priority": 10,
  "timeout_seconds": 30
}
Higher priority values fire first. Hooks are managed through workspace settings.
Example: A PostEdit hook with glob: "**/*.rs" that runs cargo test after every Rust file edit, so the assistant sees test failures immediately.
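The matcher and priority behavior can be sketched as a selection function. This is an illustrative approximation (the function name is hypothetical, and fnmatch is used as a stand-in for the real glob engine, so edge cases may differ).

```python
import fnmatch

def matching_hooks(hooks, event, tool=None, path=None):
    """Select hooks for an event, applying optional matchers (hypothetical sketch).

    A hook fires only if its event matches and every field in its
    matcher (tool, glob) matches; higher priority values fire first.
    """
    selected = []
    for hook in hooks:
        if hook["event"] != event:
            continue
        matcher = hook.get("matcher", {})
        if "tool" in matcher and matcher["tool"] != tool:
            continue
        if "glob" in matcher and (path is None or not fnmatch.fnmatch(path, matcher["glob"])):
            continue
        selected.append(hook)
    return sorted(selected, key=lambda h: h.get("priority", 0), reverse=True)
```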

Permission Rules

Workspace administrators can define rules that control which tools the assistant can use and which paths it can access.

Rule types

| Type | Effect |
| --- | --- |
| Allow | Permit the action |
| Deny | Block the action (always wins, regardless of scope) |
| Ask | Show a confirmation dialog before proceeding |

Scope hierarchy

Rules are evaluated with scope-based precedence. Organization rules outrank workspace rules, which outrank user rules:
| Scope | Priority Weight |
| --- | --- |
| Organization | +100 |
| Workspace | +50 |
| User | +10 |

Path patterns

The pattern field supports glob matching with recursive **:
**/*.rs          # all Rust files
src/secrets/**   # everything under src/secrets/
*.env            # environment files at any level
Deny rules always win. If any matching deny rule exists, the action is blocked regardless of allow rules at a higher scope.
Permission rules are managed through workspace settings.
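Putting the precedence rules together, evaluation can be sketched as follows. This is a hypothetical sketch: `evaluate` is an invented name, fnmatch approximates the glob engine, and the fallback to "ask" for unmatched actions is an assumption, not documented behavior.

```python
import fnmatch

SCOPE_WEIGHT = {"organization": 100, "workspace": 50, "user": 10}

def evaluate(rules, path):
    """Resolve the effective permission for a path (hypothetical sketch).

    Any matching deny rule blocks the action outright; otherwise the
    matching rule from the highest-weight scope decides.
    """
    matched = [r for r in rules if fnmatch.fnmatch(path, r["pattern"])]
    if any(r["type"] == "deny" for r in matched):
        return "deny"  # deny always wins, regardless of scope
    if not matched:
        return "ask"   # assumption: unmatched actions prompt the user
    best = max(matched, key=lambda r: SCOPE_WEIGHT[r["scope"]])
    return best["type"]
```

For example, a user-scope deny on src/secrets/** blocks access even when an organization-scope allow on **/*.rs matches the same file.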

Context Awareness

The assistant automatically detects the running simulation, robot type, controller state, and system health. You do not need to explain your setup — ask questions directly.
The assistant reads simulation state on demand through tool calls. It does not continuously stream all telemetry. This keeps conversations focused and avoids unnecessary overhead.