Using the Chat Window

Opening Orion

Three ways to open the Orion chat window:

  1. Window menu: Window > Azimuth Orion (most common)
  2. Toolbar button: Look for the Orion icon in the main editor toolbar (if configured)
  3. From Blueprint editor: Orion toolbar actions (such as sending a graph for analysis) will auto-open the window

Screenshot Placeholder: [Main editor toolbar showing Orion icon button location]

The chat window is dockable—drag the tab to any position in your editor layout, or float it as a separate window.

Chat Interface Tour

The Orion chat window has five main areas:

```mermaid
flowchart TB
    subgraph ChatWindow [Orion Chat Window]
        Toolbar["Top Toolbar: Mode dropdown | Settings gear"]
        MessageList["Message List: Conversation history<br/>(user messages + AI responses)"]
        InputBox["Input Box: Type your question here"]
        SendButton["Send Button: Submit query<br/>(becomes Stop while processing)"]
        BottomBar["Bottom Bar: Retry | Clear chat"]
    end
```

Screenshot Placeholder: [Annotated screenshot of Orion chat window with arrows/labels pointing to: (1) Mode dropdown in top-left, (2) Settings gear in top-right, (3) Message list in center showing sample conversation, (4) Input box at bottom, (5) Send button next to input, (6) Retry and Clear buttons]

Asking Questions

Using Orion is simple. Type your question in the input box at the bottom, click Send (or press Enter), and wait for the response. While a request is in progress, the Send button changes to "Stop" (click it to cancel) and a loading indicator may appear. The AI response appears in the message list with formatting (bold, code blocks, headers). Example questions you might ask:

  • "How do I create an array of actors in Blueprint?"
  • "Explain how Blueprint interfaces work"
  • "What's the difference between Tick and Timer?"
  • "Show me how to implement a health system in Blueprint"
  • "Best practices for multiplayer actor replication"

Screenshot Placeholder: [Chat window showing a sample question "How do I create an array of actors in Blueprint?" followed by an AI response with formatted node names and execution flow]

Screenshot Placeholder: [Chat window with Stop button visible (replaces Send) and loading indicator while query is in progress]

Modes: Orbital and Deep Scan

Orion has two query modes that adjust how the AI responds to your questions. Think of them as different "lenses" the AI uses.

Mode dropdown location: Top-left corner of the chat window

Screenshot Placeholder: [Mode dropdown expanded, showing two options: "Orbital (Ask)", "Deep Scan (Blueprint)" with radio buttons]

Orbital (Ask Mode) — Default

Orbital is best for general questions about Unreal Engine, learning new concepts, exploring broad topics, and when you're not sure where to start. The AI uses broad UE5 knowledge with accessible language and less technical jargon. Example questions: "How do I make a pickup item?", "Explain the difference between Actor and Pawn", "Best practices for inventory systems", or "How does level streaming work?"

Deep Scan (Blueprint Mode)

Deep Scan is best for technical Blueprint analysis, debugging Blueprint logic, optimization suggestions, and understanding complex node graphs. The AI uses highly technical, node-level detail and Blueprint-specific knowledge. Example questions: "Analyze this graph for performance issues", "Find unused variables", "Suggest Blueprint refactoring", or "Why isn't this event firing?" Deep Scan mode is automatically selected when you use the Blueprint toolbar or right-click menu analysis features.

Switching modes: Click the mode dropdown anytime. The change takes effect on your next query (it doesn't change previous messages). No need to restart the editor or clear chat.

Message Actions

Each message (yours and the AI's) has action buttons. A copy button on every message copies the full message text to the clipboard, which is useful for pasting AI suggestions into your notes or external tools. AI messages containing fenced code blocks also expose formatted block copy functions on each chat message widget in the message list:

  • Copy Code Block: copies a specific block by index
  • Copy All Code Blocks: copies all code blocks, separated by dividers
  • Has Code Blocks: returns true if the message contains fenced blocks (useful for showing or hiding the copy buttons)
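Under the hood, these helpers amount to scanning the message text for fenced blocks. A minimal sketch in plain C++ (the function names here are illustrative only; the actual Orion widget functions are Blueprint-callable and may differ):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Illustrative sketch: collect the contents of ``` fenced blocks in a message.
std::vector<std::string> ExtractCodeBlocks(const std::string& Message)
{
    std::vector<std::string> Blocks;
    std::istringstream Stream(Message);
    std::string Line, Current;
    bool InBlock = false;
    while (std::getline(Stream, Line))
    {
        if (Line.rfind("```", 0) == 0)   // line starts with a fence marker
        {
            if (InBlock) { Blocks.push_back(Current); Current.clear(); }
            InBlock = !InBlock;          // toggle open/close state
        }
        else if (InBlock)
        {
            Current += Line + "\n";
        }
    }
    return Blocks;
}

// "Has Code Blocks"-style check: true if at least one fenced block exists.
bool HasCodeBlocks(const std::string& Message)
{
    return !ExtractCodeBlocks(Message).empty();
}
```

"Copy Code Block by index" then reduces to indexing into the returned vector, and "Copy All Code Blocks" to joining its elements with a divider string.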

Screenshot Placeholder: [Chat message with copy button and formatted block copy button visible on a message containing a formatted block]

The Retry button in the bottom bar resends your last question. Use it when the response was incomplete or cut off, when you got an error and want to try again, or when you changed models and want to see a different response. The Clear chat button clears all messages from the chat UI and starts a fresh conversation visually. Note: For xAI (Grok), clearing the chat preserves the server-side conversation chain—your prior messages still count toward the API's context. To fully clear server history, use Settings > "Clear server history" button.

Conversation Context (Optional)

By default, each query you send is independent—Orion doesn't remember your previous messages. This keeps token costs low but means you need to repeat context for follow-up questions.

Enable conversation memory:

  1. Go to Settings > Interface
  2. Check Use Conversation Context (Experimental)
  3. Set Max Context Messages (default: 10, range: 0-35)

How it works:

  • Orion sends your last N messages with each new query
  • The AI can reference prior conversation turns
  • Multi-turn conversations feel more natural

Example with context enabled:

  • You: "How do I make a health bar?"
  • AI: [explains widget-based health bar implementation]
  • You: "Can you show that with a different approach?"
  • AI: [remembers prior context, provides alternative implementation]

Cost consideration: Conversation context increases token usage (you're sending more text with each query). This increases API fees. Use it when you need multi-turn discussions; disable it for one-off questions.
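The context window behaves like a sliding buffer over the transcript: the last N messages ride along with each new query. A rough sketch of that behavior in plain C++ (all names here are hypothetical, not Orion's actual API):

```cpp
#include <algorithm>
#include <deque>
#include <string>
#include <vector>

// Hypothetical message shape for illustration.
struct ChatMessage { std::string Role; std::string Text; };

// Build the message list for one request: at most MaxContextMessages of
// recent history (mirroring Settings > Max Context Messages, 0-35),
// followed by the new query.
std::vector<ChatMessage> BuildRequestMessages(
    const std::deque<ChatMessage>& History,
    const ChatMessage& NewQuery,
    int MaxContextMessages)
{
    std::vector<ChatMessage> Out;
    const int Start = std::max(0, (int)History.size() - MaxContextMessages);
    for (size_t i = Start; i < History.size(); ++i)
        Out.push_back(History[i]);   // oldest-to-newest context
    Out.push_back(NewQuery);         // the new query always goes last
    return Out;
}
```

With MaxContextMessages set to 0, only the new query is sent, which matches the default "each query is independent" behavior.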

Screenshot Placeholder: [Settings window showing Interface category with "Use Conversation Context" checkbox checked and "Max Context Messages" slider set to 10]

Streaming Responses

Orion streams AI responses in real time, similar to how ChatGPT or Grok web displays text as it arrives. Instead of waiting for the full response and showing it all at once, you see text progressively appear as the AI generates it. When you send a query, a streaming text area appears in the chat window. Text flows in progressively (token by token, similar to Cursor or Grok web), the streaming area auto-scrolls to show the latest text, and once the full response is received the streaming area disappears and the final formatted message (with rich text, code blocks, etc.) appears in the chat history.

The StreamingUpdatesPerSecond property controls how often the display refreshes (default: 20 updates per second, range 1–60). Lower values mean fewer, chunkier refreshes with less UI overhead; higher values give smoother text flow at the cost of more frequent UI updates. It's adjustable in the Blueprint widget. All four providers (xAI/Grok, OpenAI, Claude, Gemini) support SSE streaming, and parsing is provider-aware since each API uses a different SSE format. Streaming is pre-configured in the default Orion chat window; no additional setup is required.
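The update-rate setting amounts to a simple time-based throttle between incoming tokens and UI refreshes. A minimal sketch of that idea in plain C++ (the class is illustrative; Orion's internals may differ):

```cpp
// Illustrative throttle: only push accumulated streamed text to the UI
// when enough time has passed, per a StreamingUpdatesPerSecond-style value.
class StreamThrottle
{
public:
    explicit StreamThrottle(double UpdatesPerSecond)
        : Interval(1.0 / UpdatesPerSecond) {}

    // Returns true when the UI should refresh at time Now (seconds).
    // Tokens arriving between refreshes are buffered by the caller.
    bool ShouldUpdate(double Now)
    {
        if (Now - LastUpdate >= Interval)
        {
            LastUpdate = Now;
            return true;
        }
        return false;
    }

private:
    double Interval;            // seconds between UI refreshes
    double LastUpdate = -1e9;   // forces an immediate first refresh
};
```

At the default of 20 updates per second, the UI refreshes at most once every 50 ms no matter how quickly tokens arrive from the SSE stream.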

One-Click Output Log

Quickly send Output Log errors to Orion for AI-powered diagnosis. Click the Output Log button (or call Attach Output Log to Input) and the last 20 lines of the Output Log are appended to your input box. Add your question (e.g., "What's causing this error?") and click Send. Or call Send with Output Log to attach the log context and send immediately without typing a question—Orion receives the log lines and provides an error explanation. Use this for Blueprint compilation errors, runtime crashes in PIE, shader compilation warnings, asset loading failures, or any error or warning you want the AI to explain. The log capture count is configurable in Settings > Interface > Last Log Lines Count (default: 20, range: 1–200).
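Conceptually, attaching the log is just taking the tail of the Output Log buffer, with the count mirroring Settings > Last Log Lines Count. A small sketch in plain C++ (the function name is illustrative, not Orion's actual API):

```cpp
#include <algorithm>
#include <sstream>
#include <string>
#include <vector>

// Return the last Count lines of a log buffer, newline-terminated,
// suitable for appending to the chat input box.
std::string TailLogLines(const std::string& Log, int Count)
{
    std::vector<std::string> Lines;
    std::istringstream Stream(Log);
    std::string Line;
    while (std::getline(Stream, Line))
        Lines.push_back(Line);

    std::string Out;
    const int Start = std::max(0, (int)Lines.size() - Count);
    for (size_t i = Start; i < Lines.size(); ++i)
        Out += Lines[i] + "\n";
    return Out;
}
```
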

Screenshot Placeholder: [Chat window showing Output Log button next to the input box, and a response explaining a Blueprint compilation error]

QuickPrompts

Instead of typing your question from scratch every time, QuickPrompts give you one-click access to 24 pre-built prompts organized by asset type. These are curated starting points that help you ask the right questions — and some even auto-attach project context so the AI has real data to work with.

How to Use QuickPrompts

  1. Click the QuickPrompts button (lightning bolt icon) at the bottom of the chat window, next to the input box
  2. A popup menu appears showing prompts grouped by category (Blueprint, Material, DataTable, etc.)
  3. Click any prompt — it auto-fills your input box with the full prompt text
  4. Edit if needed — You can modify the prompt before sending
  5. Click Send — The prompt is sent to the AI like any other message

Screenshot Placeholder: [QuickPrompts popup menu open, showing grouped sections (Blueprint, Material, WidgetBlueprint, etc.) with individual prompt entries like "Summarize This Blueprint" and "Optimize Blueprint Logic"]

Available Groups

  • Blueprint (3 prompts): Summarize, explain selected nodes, optimize logic
  • Material (3 prompts): Explain material, optimize shader cost, create parameters
  • DataTable (2 prompts): Describe structure, validate entries
  • WidgetBlueprint (3 prompts): Review layout, add accessibility, explain bindings
  • Mesh (2 prompts): Analyze complexity, LOD recommendations
  • Advanced (Experimental) (9 prompts): Project health check, error analysis, debugging strategy, unit tests, Blueprint logic review, comparison, refactoring, documentation
  • Custom (2 prompts): Free-form question, beginner-friendly explanations

Context-Aware Prompts

Some prompts in the Advanced (Experimental) group automatically attach real project data:

  • Project Health Check — Attaches your indexed asset summary and plugin list so the AI analyzes your actual project, not a generic one
  • Explain This Error — Attaches recent Output Log lines so the AI sees the actual error
  • Debugging Strategy — Attaches your Output Log and Content Browser selection for targeted debugging advice

These context-aware prompts require the Project Indexing feature to be enabled (Settings > Project Indexing > Enable) and will provide better results after running Index Now.

Tip: QuickPrompts use placeholders like {ProjectName} and {EngineVersion} that are automatically replaced with your project's actual values before the prompt is sent.
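Placeholder expansion of this kind is a straightforward token-for-value substitution before the prompt is sent. A minimal sketch in plain C++ (the function name and value map are hypothetical, shown only to make the mechanism concrete):

```cpp
#include <map>
#include <string>

// Replace every {Key} token in Prompt with its value, e.g.
// {ProjectName} -> "MyGame", {EngineVersion} -> "5.4".
std::string ExpandPlaceholders(std::string Prompt,
                               const std::map<std::string, std::string>& Values)
{
    for (const auto& [Key, Value] : Values)
    {
        const std::string Token = "{" + Key + "}";
        size_t Pos = 0;
        while ((Pos = Prompt.find(Token, Pos)) != std::string::npos)
        {
            Prompt.replace(Pos, Token.size(), Value);
            Pos += Value.size();   // skip past the inserted value
        }
    }
    return Prompt;
}
```
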

Adding Your Own QuickPrompts

You can create additional QuickPrompt libraries:

  1. Content Browser > Right-click > Miscellaneous > Data Asset
  2. Select OrionQuickPromptLibrary as the class
  3. Open the new asset and add entries with Name, Prompt Text, Description, and Asset Type Filter
  4. Save — Orion automatically discovers all QuickPrompt libraries at startup

Screenshot Placeholder: [Data Asset editor showing a custom QuickPromptLibrary with fields: TemplateName, PromptText, Description, AssetTypeFilter, ContextTags]