Troubleshooting
Diagnostic Logging
By default, Orion only prints warnings and errors to the Output Log to keep it clean. When troubleshooting, you can enable verbose logging to see detailed diagnostic output.
Console Commands (type in the Output Log command bar or press ~ to open the console):
| Command | What it does |
|---|---|
| Azimuth.Orion.EnableLogging | Enables verbose logging (shows all Orion diagnostic output) |
| Azimuth.Orion.DisableLogging | Restores default logging (warnings and errors only) |
| Azimuth.Orion.Status | Prints the current plugin configuration: version, selected model, enabled providers, feature toggles |
When to use:
- If you receive API errors and need to see the full request/response flow
- If a feature isn't behaving as expected and you need to see internal state
- When submitting a support ticket — enable verbose logging, reproduce the issue, then share the Output Log
All commands auto-complete in the UE5 console.
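For example, a support-ticket session might run the commands in this order (the middle line is a placeholder for whatever reproduces your problem):

```
Azimuth.Orion.Status
Azimuth.Orion.EnableLogging
    ... reproduce the issue ...
Azimuth.Orion.DisableLogging
```

Copy the Output Log from between EnableLogging and DisableLogging into your ticket, along with the Status output.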
"API request failed (Code: 401)"
Cause: Invalid or expired API key
Solution:
- Open Settings > AI Providers > [Your Provider]
- Check that the API key is entered correctly
- Click the Validate button
- If validation fails:
  - Go to the provider's website (console.x.ai, platform.openai.com, etc.)
  - Check that your API key is still active
  - Check that you have available credits/billing set up
  - Generate a new API key if needed
  - Enter the new key in Orion and validate again
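As a rough mental model of how a client could turn provider status codes into the advice in this guide, here is a hypothetical sketch (this is not Orion's actual code; the function name and messages are assumptions):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: map an HTTP status code from the AI provider to
// the troubleshooting advice given in this guide.
std::string AdviceForStatus(int Code)
{
    switch (Code)
    {
    case 401: return "Invalid or expired API key: re-validate in Settings > AI Providers";
    case 429: return "Rate limit exceeded: wait 30-60 seconds and retry";
    default:  return "API request failed (Code: " + std::to_string(Code) + ")";
    }
}
```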
"API request failed (Code: 429)"
Cause: Rate limit exceeded (too many requests too fast)
Solution:
- Wait 30-60 seconds, then try again
- Orion has client-side rate limiting, but providers have their own limits
- Check your provider dashboard for rate limit info
- Consider upgrading to a higher tier plan if you hit limits frequently
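The usual pattern behind "wait, then try again" is exponential backoff. Orion's real client-side rate limiter is internal, so the numbers and names below are assumptions, but the sketch shows the idea:

```cpp
#include <cassert>
#include <algorithm>
#include <cstdint>

// Illustrative backoff for 429 responses: double the wait on each retry,
// starting at 1 second and never exceeding 60 seconds.
int64_t BackoffDelayMs(int Attempt /* 0-based retry count */)
{
    const int64_t BaseMs = 1000;   // first wait: 1 s
    const int64_t CapMs  = 60000;  // never wait longer than 60 s
    return std::min(CapMs, BaseMs << std::min(Attempt, 10));
}
```

So a third consecutive 429 would wait 4 seconds, and the delay saturates at the cap rather than growing without bound.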
"Conversation chain expired on the server"
Cause: The xAI conversation chain expired (typically after extended periods of inactivity)
Solution:
- This is handled automatically: Orion clears the stale conversation and starts fresh
- Simply resend your message
- To prevent it: use the "Clear server history" button in settings after long breaks
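The automatic recovery amounts to forgetting the stored conversation id when the server reports an expired chain. A minimal sketch of that rule, with hypothetical names (Orion's internals may differ):

```cpp
#include <cassert>
#include <string>

// Hypothetical: given the stored conversation id and the server's error
// text, return the id to use for the next request. An expired chain
// invalidates the stored id, so the next send starts a fresh chain.
std::string NextConversationId(const std::string& CurrentId,
                               const std::string& ErrorMessage)
{
    const bool bExpired =
        ErrorMessage.find("Conversation chain expired") != std::string::npos;
    return bExpired ? std::string() : CurrentId;
}
```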
"The currently selected model does not support images"
Cause: You captured a screenshot but your current model doesn't support vision
Solution:
- Open Settings > AI Configuration > Model ID
- Select a vision-capable model (marked [Vision] in the dropdown), for example:
  - xAI: grok-4, grok-4-1-fast-reasoning, grok-4-1-fast-non-reasoning
  - OpenAI: gpt-5.2, gpt-5-mini, gpt-5.3-codex, o4-mini
  - Anthropic: All Claude models (Opus 4.6, Opus 4.5, Sonnet 4.6, Sonnet 4.5, Haiku 4.5)
  - Google: All Gemini models (3 Pro, 3 Flash, 3.1 Pro, 2.5 Pro, 2.5 Flash)
- Capture screenshot again
- Send your query
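Conceptually, this error is a pre-flight check on the selected model id. A hypothetical sketch of such a check, using the example model names listed above (the id table and prefix rules are assumptions, not Orion's internal capability list):

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical: decide whether a screenshot may be attached for the
// currently selected model id.
bool SupportsVision(const std::string& ModelId)
{
    // Explicit ids taken from the examples in this guide.
    static const std::set<std::string> VisionModels = {
        "grok-4", "grok-4-1-fast-reasoning", "grok-4-1-fast-non-reasoning",
        "gpt-5.2", "gpt-5-mini", "gpt-5.3-codex", "o4-mini",
    };
    // Claude and Gemini families are vision-capable across the board,
    // so match those on prefix instead of listing every id.
    if (ModelId.rfind("claude-", 0) == 0 || ModelId.rfind("gemini-", 0) == 0)
        return true;
    return VisionModels.count(ModelId) > 0;
}
```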
Right-click menu doesn't appear
Cause: "Enable Right-Click Menu" disabled in settings
Solution:
- Settings > Interface
- Check Enable Right-Click Menu (turn it ON)
- Restart editor (may be required)
- Right-click menu should now appear
Response is incomplete or cut off
Cause: Model hit max output token limit mid-response
Solution:
Option 1: Use Retry button
- Click Retry in the chat bottom bar
- AI may continue from where it stopped (works with some models)
Option 2: Increase token limit
- Settings > AI Configuration > Max Output Tokens
- Increase to 8000 or 16000 (or 0 for unlimited)
- Ask your question again
Option 3: Ask more focused questions
- Instead of "Explain everything about Blueprints", ask "Explain Blueprint execution flow"
- Narrow the scope to get complete answers
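Under the hood, most providers return a finish/stop reason alongside the text, and that is how a client can tell a deliberate stop from a token-limit cutoff. A sketch of that distinction (the reason strings follow common provider conventions, e.g. "length" in OpenAI-style APIs and "max_tokens" in Anthropic-style APIs; whether Orion checks them this way is an assumption):

```cpp
#include <cassert>
#include <string>

// Hypothetical: true when the reply was cut off by the output-token
// limit, i.e. when Retry or a larger Max Output Tokens setting helps.
bool WasTruncated(const std::string& FinishReason)
{
    return FinishReason == "length" || FinishReason == "max_tokens";
}
```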
Chat window won't scroll to bottom
Cause: Manual scroll position is preserved by design (so you can read old messages while new ones arrive)
Solution:
- Scroll to the bottom manually: auto-scroll re-enables once you're at the bottom
- Send a new message: auto-scroll re-enables on send
- This is intentional behavior (not a bug)
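The rule above can be stated in a few lines. This sketch is illustrative only (the pixel tolerance and parameter names are assumptions, not Orion's widget code):

```cpp
#include <cassert>

// Hypothetical: auto-scroll sticks to the bottom only when the user is
// already there (within a small tolerance), or when they send a message.
bool ShouldAutoScroll(float ScrollOffset, float MaxOffset, bool bJustSent)
{
    const float TolerancePx = 4.0f;  // slack for rounding/layout jitter
    const bool bAtBottom = (MaxOffset - ScrollOffset) <= TolerancePx;
    return bAtBottom || bJustSent;
}
```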
"Failed to compile Blueprint"
Cause: Blueprint has errors, and "Compile On Send" is enabled
Solution:
- This is actually helpful: the AI receives the compiler errors
- Ask: "Why did compilation fail?" or "What are these errors?"
- AI will diagnose based on the error messages
- Fix the errors, send again
If you don't want auto-compilation:
- Settings > Blueprint Context > Compile On Send (uncheck)
"Request timeout"
Cause: The AI provider took longer than the timeout setting to respond
Solution:
- Increase timeout: Settings > AI Configuration > Request Timeout (Seconds)
- Set to 90 or 120 seconds (default is 60-75 depending on model)
- Try again
- If timeouts persist:
  - Switch to a faster model (grok-code-fast-1, gpt-5-mini)
  - Reduce Blueprint/context size (send Selection instead of Full Graph)
  - Check your internet connection
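The timeout guidance above (raise from the default toward 90-120 seconds) can be captured as a simple escalation rule. The function is hypothetical; only the numbers come from this guide:

```cpp
#include <cassert>
#include <algorithm>

// Hypothetical: widen the request timeout by 30 s on each failed
// attempt, capped at the 120 s suggested above. If you still time out
// at the cap, switch models or shrink the context instead.
int TimeoutForAttempt(int BaseSeconds, int Attempt /* 0-based */)
{
    return std::min(120, BaseSeconds + 30 * Attempt);
}
```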