Using the Chat Window
Opening Orion
Three ways to open the Orion chat window:
- Window menu: Window > Azimuth Orion (most common)
- Toolbar button: Look for the Orion icon in the main editor toolbar (if configured)
- From the Blueprint editor: Orion toolbar actions auto-open the chat window
Screenshot Placeholder: [Main editor toolbar showing Orion icon button location]
The chat window is dockable—drag the tab to any position in your editor layout, or float it as a separate window.
Chat Interface Tour
The Orion chat window has five main areas:
```mermaid
flowchart TB
    subgraph ChatWindow [Orion Chat Window]
        Toolbar["Top Toolbar: Mode dropdown | Settings gear"]
        MessageList["Message List: Conversation history<br/>(user messages + AI responses)"]
        InputBox["Input Box: Type your question here"]
        SendButton["Send Button: Submit query<br/>(becomes Stop while processing)"]
        BottomBar["Bottom Bar: Retry | Clear chat"]
    end
```
Screenshot Placeholder: [Annotated screenshot of Orion chat window with arrows/labels pointing to: (1) Mode dropdown in top-left, (2) Settings gear in top-right, (3) Message list in center showing sample conversation, (4) Input box at bottom, (5) Send button next to input, (6) Retry and Clear buttons]
Asking Questions
Using Orion is simple:
- Type your question in the input box at the bottom
- Click Send (or press Enter)
- Wait for the response — The Send button changes to "Stop" while processing
- Read the AI response — Appears in the message list with formatting (bold, code blocks, headers)
Example questions you can ask:
- "How do I create a TArray of actors in C++?"
- "Explain how Blueprint interfaces work"
- "What's the difference between Tick and Timer?"
- "Show me how to implement a health system"
- "Best practices for multiplayer actor replication"
Screenshot Placeholder: [Chat window showing a sample question "How do I create a TArray of actors in C++?" followed by an AI response with formatted C++ code block showing TArray<AActor*> syntax]
While processing:
- The Send button changes to Stop
- You can click Stop to cancel the request
- A loading indicator may appear
Screenshot Placeholder: [Chat window with Stop button visible (replaces Send) and loading indicator while query is in progress]
Modes: Orbital, C++ Mode, Deep Scan
Orion has three query modes that adjust how the AI responds to your questions. Think of them as different "lenses" the AI uses.
Mode dropdown location: Top-left corner of the chat window
Screenshot Placeholder: [Mode dropdown expanded, showing three options: "Orbital (Ask)", "C++ Mode", "Deep Scan (Blueprint)" with radio buttons]
Orbital (Ask Mode) — Default
Best for:
- General questions about Unreal Engine
- Learning new concepts
- Exploring broad topics
- When you're not sure where to start
AI behavior: Broad UE5 knowledge, accessible language, less technical jargon
Example questions:
- "How do I make a pickup item?"
- "Explain the difference between Actor and Pawn"
- "Best practices for inventory systems"
- "How does level streaming work?"
C++ Mode
Best for:
- Code generation
- C++ syntax questions
- API reference lookups
- When you need actual code snippets
AI behavior: C++ focused, uses UCLASS/UFUNCTION/UPROPERTY syntax, references official UE5 documentation
Example questions:
- "Generate a C++ class for a health component"
- "How do I use TObjectPtr instead of raw pointers?"
- "What's the correct way to bind a delegate in C++?"
- "Show me how to implement IInterface in C++"
Deep Scan (Blueprint Mode)
Best for:
- Technical Blueprint analysis
- Debugging Blueprint logic
- Optimization suggestions
- Understanding complex node graphs
AI behavior: Highly technical, node-level detail, Blueprint/C++ interop knowledge
Example questions:
- "Analyze this graph for performance issues"
- "Find unused variables"
- "Suggest Blueprint refactoring"
- "Why isn't this event firing?"
Note: Deep Scan mode is automatically selected when you use the Blueprint toolbar or right-click menu analysis features.
Switching modes:
- Click the mode dropdown anytime
- Change takes effect on your next query (doesn't change previous messages)
- No need to restart the editor or clear chat
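Conceptually, each mode steers the AI's behavior the way a system-prompt switch would. A minimal sketch of that idea, assuming hypothetical names (`EOrionMode`, `SystemPromptFor` are illustrative, not Orion's actual internals):

```cpp
#include <string>

// Illustrative only: maps a query mode to the style of guidance
// the AI receives. The enum values mirror the three documented modes.
enum class EOrionMode { Orbital, CppMode, DeepScan };

std::string SystemPromptFor(EOrionMode Mode) {
    switch (Mode) {
        case EOrionMode::CppMode:
            // C++ Mode: code-focused, UCLASS/UFUNCTION/UPROPERTY syntax
            return "Answer with UE5 C++ code using UCLASS/UFUNCTION/UPROPERTY.";
        case EOrionMode::DeepScan:
            // Deep Scan: node-level Blueprint analysis
            return "Give highly technical, node-level Blueprint analysis.";
        case EOrionMode::Orbital:
        default:
            // Orbital: broad knowledge, accessible language
            return "Explain UE5 concepts in accessible language.";
    }
}
```

Because only the next query is affected, switching the dropdown simply changes which prompt accompanies your next message; earlier messages are untouched.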
Message Actions
Each message (yours and the AI's) has action buttons:
Copy button (on every message):
- Copies the full message text to clipboard
- Useful for pasting AI code suggestions into your project
- Click the copy icon on any message
Code Block Copy (on AI messages with code):
- Copy Code Block — Copies a specific code block from the message (by index)
- Copy All Code Blocks — Copies all code blocks from the message, separated by dividers
- Has Code Blocks — Returns true if the message contains ``` fenced code; use it to show/hide the copy-code buttons
- These functions are available on each chat message widget in the message list
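The detection and extraction described above amount to scanning the message for pairs of ``` fences. A self-contained sketch (function names are hypothetical, not the widget's actual API):

```cpp
#include <string>
#include <vector>

// A message "has code blocks" if it contains at least one pair of ``` fences.
bool HasCodeBlocks(const std::string& Message) {
    std::size_t First = Message.find("```");
    if (First == std::string::npos) return false;
    return Message.find("```", First + 3) != std::string::npos;
}

// Extract the body of every fenced block, in order of appearance
// (this is what "Copy Code Block by index" would iterate over).
std::vector<std::string> ExtractCodeBlocks(const std::string& Message) {
    std::vector<std::string> Blocks;
    std::size_t Pos = 0;
    while (true) {
        std::size_t Open = Message.find("```", Pos);
        if (Open == std::string::npos) break;
        // Skip the opening fence line (which may carry a language tag).
        std::size_t BodyStart = Message.find('\n', Open);
        if (BodyStart == std::string::npos) break;
        ++BodyStart;
        std::size_t Close = Message.find("```", BodyStart);
        if (Close == std::string::npos) break;
        Blocks.push_back(Message.substr(BodyStart, Close - BodyStart));
        Pos = Close + 3;
    }
    return Blocks;
}
```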
Screenshot Placeholder: [Chat message with copy button and code copy button visible on a message containing a code block]
Retry button (bottom bar):
- Resends your last question
- Useful if:
- Response was incomplete or cut off
- You got an error and want to try again
- You changed models and want to see a different response
- Click Retry in the bottom toolbar
Clear chat button (bottom bar):
- Clears all messages from the chat UI
- Starts a fresh conversation visually
- Note: For xAI (Grok), this preserves the server-side conversation chain (your prior messages still count toward the API's context)
- To fully clear server history, use Settings > "Clear server history" button
Conversation Context (Optional)
By default, each query you send is independent—Orion doesn't remember your previous messages. This keeps token costs low but means you need to repeat context for follow-up questions.
Enable conversation memory:
- Go to Settings > Interface
- Check Use Conversation Context (Experimental)
- Set Max Context Messages (default: 10, range: 0-35)
How it works:
- Orion sends your last N messages with each new query
- The AI can reference prior conversation turns
- Multi-turn conversations feel more natural
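"Send your last N messages" is a sliding window over the conversation history. A minimal sketch of that trimming step, with hypothetical names (`BuildContext` is illustrative):

```cpp
#include <string>
#include <vector>

// Keep only the most recent MaxContextMessages entries of the history;
// this window is what would accompany each new query.
std::vector<std::string> BuildContext(const std::vector<std::string>& History,
                                      std::size_t MaxContextMessages) {
    if (History.size() <= MaxContextMessages) return History;
    return std::vector<std::string>(History.end() - MaxContextMessages,
                                    History.end());
}
```

With `Max Context Messages` set to 0 the window is empty, which matches the default "each query is independent" behavior.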
Example with context enabled:
- You: "How do I make a health bar?"
- AI: [explains widget-based health bar implementation]
- You: "Can you show that in C++ instead?"
- AI: [remembers prior context, provides C++ health component version]
Cost consideration: Conversation context increases token usage (you're sending more text with each query). This increases API fees. Use it when you need multi-turn discussions; disable it for one-off questions.
Screenshot Placeholder: [Settings window showing Interface category with "Use Conversation Context" checkbox checked and "Max Context Messages" slider set to 10]
Streaming Responses
Orion streams AI responses in real-time, similar to how ChatGPT or Grok web displays text as it arrives. Instead of waiting for the full response and showing it all at once, you see text progressively appear as the AI generates it.
How it works:
- When you send a query, a streaming text area appears in the chat window
- Text flows in progressively as the AI generates it (token by token, similar to Cursor or Grok web)
- The streaming area auto-scrolls to show the latest text
- Once the full response is received, the streaming area disappears and the final formatted message (with rich text, code blocks, etc.) appears in the chat history
Flow rate control:
- The StreamingUpdatesPerSecond property controls how often the display refreshes (default: 20; range: 1–60 updates per second)
- Lower values reduce UI redraw overhead but make text arrive in larger chunks; higher values look more fluid at the cost of more frequent UI updates
- Adjustable in the Blueprint widget (BP-editable property on UOrionChatWidget)
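The throttling idea is simple: accumulate frame delta time and only repaint the streaming text area once a full interval has elapsed. A sketch under that assumption (struct and method names are illustrative, not the widget's actual members):

```cpp
// Throttles UI refreshes to UpdatesPerSecond, independent of how
// fast tokens arrive from the provider.
struct FStreamThrottle {
    double Interval;           // seconds between UI refreshes
    double Accumulated = 0.0;  // time since the last refresh

    explicit FStreamThrottle(double UpdatesPerSecond)
        : Interval(1.0 / UpdatesPerSecond) {}

    // Call once per tick with the frame's delta time; returns true
    // when enough time has elapsed to repaint the streaming text.
    bool ShouldRefresh(double DeltaSeconds) {
        Accumulated += DeltaSeconds;
        if (Accumulated >= Interval) {
            Accumulated = 0.0;
            return true;
        }
        return false;
    }
};
```

At the default of 20 updates per second, the widget repaints at most every 50 ms even if tokens stream in faster.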
Provider support:
- All four providers (xAI/Grok, OpenAI, Claude, Gemini) support SSE streaming
- Parsing is provider-aware (each API has a different SSE format)
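As a rough illustration of what "provider-aware SSE parsing" involves, here is a sketch for the common `data: ...` framing with a `[DONE]` sentinel (xAI/OpenAI-style); the real parser branches per provider since each API frames its events differently, and the function name here is hypothetical:

```cpp
#include <optional>
#include <string>

// Returns the JSON payload of an SSE "data:" line, or nothing for
// non-data lines (e.g. "event:" lines, blank keep-alives) and for
// the "[DONE]" end-of-stream sentinel.
std::optional<std::string> ExtractSsePayload(const std::string& Line) {
    const std::string Prefix = "data: ";
    if (Line.rfind(Prefix, 0) != 0) return std::nullopt;  // not a data line
    std::string Payload = Line.substr(Prefix.size());
    if (Payload == "[DONE]") return std::nullopt;         // end of stream
    return Payload;
}
```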
Setup note: Streaming is pre-configured in the default Orion chat window. No additional setup is required.
One-Click Output Log
Quickly send Output Log errors to Orion for AI-powered diagnosis:
Attach to Input:
- Click the Output Log button (or call Attach Output Log to Input)
- The last 20 lines of the Output Log are appended to your input box
- Add your question (e.g., "What's causing this error?") and click Send
Send Immediately:
- Call Send with Output Log to attach the log context and send immediately without typing a question
- Orion receives the log lines and provides an error explanation
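Under the hood, both paths boil down to combining your question with the tail of the log. A sketch of that combination, assuming a hypothetical prompt format (`BuildLogPrompt` and the separator text are illustrative, not Orion's actual wire format):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Appends the last LastLogLinesCount lines of the Output Log to the
// user's question, producing a single prompt string for the AI.
std::string BuildLogPrompt(const std::string& Question,
                           const std::vector<std::string>& LogLines,
                           std::size_t LastLogLinesCount) {
    std::size_t Start = LogLines.size() > LastLogLinesCount
                            ? LogLines.size() - LastLogLinesCount
                            : 0;
    std::ostringstream Out;
    Out << Question << "\n\n--- Output Log (last "
        << (LogLines.size() - Start) << " lines) ---\n";
    for (std::size_t i = Start; i < LogLines.size(); ++i)
        Out << LogLines[i] << "\n";
    return Out.str();
}
```

The `LastLogLinesCount` parameter corresponds to the setting described in the tip below (default 20).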
When to use:
- Blueprint compilation errors
- Runtime crashes in PIE
- Shader compilation warnings
- Asset loading failures
- Any error/warning you want the AI to explain
Tip: The log capture count is configurable in Settings > Interface > Last Log Lines Count (default: 20, range: 1-200).
Screenshot Placeholder: [Chat window showing Output Log button next to the input box, and a response explaining a Blueprint compilation error]
QuickPrompts
Instead of typing your question from scratch every time, QuickPrompts give you one-click access to 24 pre-built prompts organized by asset type. These are curated starting points that help you ask the right questions — and some even auto-attach project context so the AI has real data to work with.
How to Use QuickPrompts
- Click the QuickPrompts button (lightning bolt icon) at the bottom of the chat window, next to the input box
- A popup menu appears showing prompts grouped by category (Blueprint, Material, DataTable, etc.)
- Click any prompt — it auto-fills your input box with the full prompt text
- Edit if needed — You can modify the prompt before sending
- Click Send — The prompt is sent to the AI like any other message
Screenshot Placeholder: [QuickPrompts popup menu open, showing grouped sections (Blueprint, Material, WidgetBlueprint, etc.) with individual prompt entries like "Summarize This Blueprint" and "Optimize Blueprint Logic"]
Available Groups
| Group | Prompts | What They Cover |
|---|---|---|
| Blueprint | 3 | Summarize, explain selected nodes, optimize logic |
| Material | 3 | Explain material, optimize shader cost, create parameters |
| DataTable | 2 | Describe structure, validate entries |
| WidgetBlueprint | 3 | Review layout, add accessibility, explain bindings |
| Mesh | 2 | Analyze complexity, LOD recommendations |
| Advanced (Experimental) | 9 | Project health check, error analysis, debugging strategy, C++ conversion, unit tests, code review, comparison, refactoring, documentation |
| Custom | 2 | Free-form question, beginner-friendly explanations |
Context-Aware Prompts
Some prompts in the Advanced (Experimental) group automatically attach real project data:
- Project Health Check — Attaches your indexed asset summary and plugin list so the AI analyzes your actual project, not a generic one
- Explain This Error — Attaches recent Output Log lines so the AI sees the actual error
- Debugging Strategy — Attaches your Output Log and Content Browser selection for targeted debugging advice
These context-aware prompts require the Project Indexing feature to be enabled (Settings > Project Indexing > Enable) and will provide better results after running Index Now.
Tip: QuickPrompts use placeholders like {ProjectName} and {EngineVersion} that are automatically replaced with your project's actual values before the prompt is sent.
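The placeholder substitution in the tip above can be pictured as a straightforward token replacement. A self-contained sketch (the function is illustrative; only the `{ProjectName}` and `{EngineVersion}` token names come from the documentation):

```cpp
#include <map>
#include <string>

// Replaces every "{Key}" token in the prompt with its mapped value.
std::string FillPlaceholders(std::string Prompt,
                             const std::map<std::string, std::string>& Values) {
    for (const auto& [Key, Value] : Values) {
        const std::string Token = "{" + Key + "}";
        std::size_t Pos = 0;
        while ((Pos = Prompt.find(Token, Pos)) != std::string::npos) {
            Prompt.replace(Pos, Token.size(), Value);
            Pos += Value.size();  // continue past the inserted value
        }
    }
    return Prompt;
}
```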
Adding Your Own QuickPrompts
You can create additional QuickPrompt libraries:
- Content Browser > Right-click > Miscellaneous > Data Asset
- Select OrionQuickPromptLibrary as the class
- Open the new asset and add entries with Name, Prompt Text, Description, and Asset Type Filter
- Save — Orion automatically discovers all QuickPrompt libraries at startup
Screenshot Placeholder: [Data Asset editor showing a custom QuickPromptLibrary with fields: TemplateName, PromptText, Description, AssetTypeFilter, ContextTags]