
Tips and Best Practices

Choosing the Right Model

For cost-conscious users:

  • grok-code-fast-1 — Fastest, lowest cost, surprisingly good quality
  • GPT-4o mini — Affordable, decent quality
  • Gemini 1.5 Flash — Fast, low cost, huge context window

For best quality:

  • GPT-4o — Best vision support, excellent reasoning
  • Claude Opus 4 — Top-tier quality, 200k context
  • Grok 4 — Latest from xAI, strong UE5 knowledge

For vision (screenshots):

  • GPT-4o — Industry standard, reliable
  • Grok Vision — Good balance of speed and quality
  • Claude 3.5 Sonnet — Excellent vision + reasoning
  • Gemini Pro Vision — Solid option, low cost

For huge Blueprints/context:

  • Claude Opus 4 — 200k tokens (fits massive Blueprints)
  • Gemini 1.5 Pro — 1M+ tokens (can handle entire projects)

Asking Effective Questions

Good questions are specific and action-oriented:

Good:

  • "How do I spawn an actor at the player's location in C++?"
  • "Analyze this Blueprint for unused variables"
  • "What's the performance impact of using Tick vs. Timer?"
  • "Generate a health component with TakeDamage and Heal functions"

Less effective:

  • "Help" (too vague—help with what?)
  • "Fix my code" (no context—what code? what's wrong?)
  • "Why doesn't it work?" (need error message, logs, or Blueprint context)
  • "Make it better" (better how? performance? readability?)

Rule of thumb: If you were asking a colleague, would they understand the question without seeing your screen? If not, add context (screenshot, Blueprint context, log lines).
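
For example, the first question under "Good" is specific enough to get a complete, working answer. A minimal sketch of the kind of C++ such a question might produce (the class, function, and PickupClass names are placeholders for illustration, not anything Orion generates by default):

    // Minimal sketch: spawn an actor at the player's location (UE5 C++).
    // AMyGameMode, SpawnPickupAtPlayer, and PickupClass are placeholder names.
    #include "Kismet/GameplayStatics.h"

    void AMyGameMode::SpawnPickupAtPlayer()
    {
        // Find the first local player's pawn and read its current transform.
        if (APawn* PlayerPawn = UGameplayStatics::GetPlayerPawn(this, 0))
        {
            const FVector Location = PlayerPawn->GetActorLocation();
            const FRotator Rotation = PlayerPawn->GetActorRotation();

            // PickupClass is assumed to be a TSubclassOf<AActor> UPROPERTY on this class.
            GetWorld()->SpawnActor<AActor>(PickupClass, Location, Rotation);
        }
    }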

Providing Context

The more context you provide, the better the AI can help:

For compilation errors:

  • Enable "Include Last Log Lines in Query"
  • Settings > Interface > Include Last Log Lines in Query (check ON)
  • Set Last Log Lines Count to 20-50
  • Ask: "What's causing this error?"

For visual issues:

  • Capture a screenshot (Blueprint toolbar Screenshot button)
  • Use a vision-capable model (GPT-4o, Grok Vision, Claude 3.5+)
  • Ask: "What's wrong with this node layout?"

For logic errors:

  • Send Blueprint context (toolbar or right-click)
  • Switch to Deep Scan mode
  • Ask: "Why doesn't this event fire?"

For asset questions:

  • Right-click the asset > Describe to Orion
  • Ask follow-up questions with context

Managing Token Costs

Tokens translate directly into API fees. Here's how to keep costs low:

Use fast/cheap models for simple questions:

  • grok-code-fast-1 costs ~1/10th as much as GPT-4o
  • GPT-4o mini costs ~1/5th as much as GPT-4o
  • Save expensive models for complex analysis

Disable conversation context when not needed:

  • Settings > Interface > Use Conversation Context (uncheck)
  • Each query becomes independent (no history sent)
  • Big savings if you don't need multi-turn discussions

Use "Graph Flow" instead of "Full Graph":

  • Graph Flow analyzes only execution nodes (smaller payload)
  • Full Graph includes all data links (larger payload)
  • Graph Flow can use 30-50% fewer tokens

Select specific nodes for large Blueprints:

  • Don't send a 200-node Blueprint for a 10-node question
  • Select the relevant nodes, then use Send > Selection
  • This can reduce the payload by 80-90%
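
A back-of-the-envelope sketch of how these reductions compound (the starting token count is invented for illustration; only the percentages come from the tips above):

    // Hypothetical numbers, not measured from Orion.
    const double FullGraphTokens = 12000.0;                 // imagined 200-node Blueprint as Full Graph

    // Send > Selection on the relevant 10-node section (~85% smaller payload).
    const double AfterSelection = FullGraphTokens * 0.15;   // ~1,800 tokens

    // Graph Flow instead of Full Graph (~40% fewer tokens on top of that).
    const double AfterGraphFlow = AfterSelection * 0.60;    // ~1,080 tokens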

Clear server history periodically:

  • xAI conversation history accumulates over time
  • Settings > "Clear server history" button
  • Prevents old context from inflating future queries

Set Max Output Tokens:

  • Settings > AI Configuration > Max Output Tokens
  • Limit response length if you don't need essays
  • Example: Set to 2000 for concise answers (default 0 = unlimited)

Security Best Practices

API key safety:

  • Never share your API keys with others
  • Never commit keys to version control (Orion stores them in your user profile, not in project files)
  • Set key expiry (30 days recommended) so old keys auto-expire
  • Validate keys regularly (Settings > Validate button)

Query review before sending:

  • Orion never sends anything without your explicit action (clicking Send, right-click option, toolbar button)
  • Review what you're sending, especially if your project has proprietary logic
  • Use "Describe to Orion" for metadata-only analysis (no full code/graph sent)

Context injection settings:

  • "Include Last Log Lines" — only enable when troubleshooting (logs may contain sensitive data)
  • Project name and engine version are always included to help the AI provide relevant answers

API provider privacy policies:

  • Review xAI, OpenAI, Anthropic, Google privacy policies
  • Understand what data is retained (most providers retain queries for 30 days)
  • For maximum privacy: Use EDA export (local AI) instead of cloud APIs