Tips and Best Practices

Choosing the Right Model

For cost-conscious users: The grok-4-1-fast models (reasoning and non-reasoning) are xAI's lowest-cost options and offer a 2M token context window—ideal for large Blueprints. grok-code-fast-1 and grok-4 round out the xAI lineup with strong UE5 knowledge. gpt-5-mini and gemini-2.5-flash are affordable alternatives from other providers; Gemini Flash adds a 1M context window.

For best quality: gpt-5.2, claude-opus-4-6, and grok-4 deliver top-tier reasoning and analysis. All support vision for screenshot debugging.

For vision (screenshots): Multiple models support image input. gpt-5-mini is a cost-effective vision option. grok-4, gpt-5.2, claude-sonnet-4-5, and gemini-2.5-flash are all solid choices depending on your provider and budget.

For huge Blueprints/context: The grok-4-1-fast models have a 2M token context window—the largest in Orion. grok-code-fast-1 has a larger context than most Claude models. gemini-2.5-pro offers 1M tokens for providers that support it.
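To get a feel for whether a large Blueprint export will fit a given window, a rough back-of-envelope check helps. This sketch uses the common ~4 characters/token heuristic; the payload size and reply reserve below are made-up illustration values, not Orion measurements, and real tokenizer counts vary by model:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token rule of thumb.
    Actual tokenizer output varies by model; treat this as a ballpark."""
    return max(1, len(text) // 4)

def fits_context(text: str, context_window: int, reserve_for_reply: int = 4000) -> bool:
    """True if the payload plus a reserved reply budget fits the window."""
    return estimate_tokens(text) + reserve_for_reply <= context_window

# Hypothetical 3 MB graph export (~750k estimated tokens):
payload = "x" * 3_000_000
print(fits_context(payload, 2_000_000))  # 2M-token window: True
print(fits_context(payload, 200_000))    # 200k-token window: False
```

If the check fails for your target model, that is the cue to switch to a larger-window model or send a selection instead of the full graph.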

Asking Effective Questions

The better your question, the better the answer. Vague prompts ("Help", "Fix my Blueprint") give the AI little to work with. Specific, action-oriented questions yield practical answers. Here are some examples:

Good:

  • "How do I spawn an actor at the player's location in Blueprint?"
  • "Analyze this Blueprint for unused variables"
  • "What's the performance impact of using Tick vs. Timer?"
  • "Generate Blueprint logic for a health component with TakeDamage and Heal"

Less effective:

  • "Help" (too vague—help with what?)
  • "How do I fix my Blueprint?" (no context—which Blueprint? what's wrong?)
  • "Why doesn't it work?" (need error message, logs, or Blueprint context)
  • "Make it better" (better how? performance? readability?)

Rule of thumb: If you were asking a colleague, would they understand the question without seeing your screen? If not, add context (screenshot, Blueprint context, log lines).

Providing Context

Orion can include different kinds of context depending on the problem. Match what you send to what you're debugging.

For compilation errors: Include the log. Enable "Include Last Log Lines in Query" (Settings > Interface), set Last Log Lines Count to 20–50, then ask "What's causing this error?" The AI will see the same output you see in the Output Log.

For visual issues: A screenshot is worth a thousand words. Use the Blueprint toolbar Screenshot button, pick a vision-capable model, and ask "What's wrong with this node layout?"

For logic errors: Send Blueprint context (toolbar or right-click), switch to Deep Scan mode, and ask "Why doesn't this event fire?" The AI needs the graph structure to trace execution.

For asset questions: Right-click the asset > Describe to Orion, then ask follow-up questions. The metadata gives the AI enough to answer without exposing full asset contents.

Managing Token Costs

Token usage drives API costs. A few habits can keep bills low without sacrificing capability.

Model choice has the biggest impact. Use grok-4-1-fast models for routine questions—they're the lowest-cost Grok options and have a 2M context window. grok-code-fast-1 and gpt-5-mini are also cost-effective. Reserve flagship models (gpt-5.2, claude-opus-4-6) for complex analysis where the extra capability pays off.

Conversation context sends prior messages with each query. Turn it off (Settings > Interface > Use Conversation Context) when you're doing one-off questions; each query becomes independent and you avoid paying for history you don't need.

Graph payload size matters. "Graph Flow" analyzes only execution flow; "Full Graph" includes data links. Flow typically sends 30–50% fewer tokens. For large Blueprints, select the relevant nodes and use Send > Selection instead of the full graph—often 80–90% fewer tokens for a 10-node question in a 200-node Blueprint.
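The selection savings are simple arithmetic. The token counts below are invented for illustration (a real Blueprint's payload size depends on node and pin counts), but the percentage math is what the guideline above is based on:

```python
def savings_pct(full_tokens: int, reduced_tokens: int) -> float:
    """Percent reduction in payload tokens from sending a smaller context."""
    return round(100 * (1 - reduced_tokens / full_tokens), 1)

# Hypothetical sizes: a 200-node full-graph send vs a 10-node selection.
full_graph = 40_000   # assumed tokens for the whole Blueprint
selection = 4_000     # assumed tokens for the relevant nodes
print(savings_pct(full_graph, selection))  # → 90.0
```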

xAI users: Conversation history accumulates on xAI's side. Use Settings > "Clear server history" periodically so old context doesn't inflate future query sizes.

Max Output Tokens (Settings > AI Configuration) caps response length. Set it to 2000 or 4000 when you want concise answers instead of essays; the default of 0 means unlimited.
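Because output tokens are usually priced higher than input tokens, capping response length can cut a query's cost meaningfully. The per-million-token prices below are placeholders (check your provider's current pricing), shown only to illustrate the effect of the cap:

```python
def query_cost(input_tokens: int, output_tokens: int,
               in_price_per_m: float, out_price_per_m: float) -> float:
    """Estimated USD cost for one query, given per-million-token prices.
    Prices are placeholders; consult your provider's pricing page."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# Uncapped 8000-token essay vs a 2000-token cap (hypothetical $1/M in, $5/M out):
print(query_cost(10_000, 8_000, 1.0, 5.0))  # → 0.05
print(query_cost(10_000, 2_000, 1.0, 5.0))  # → 0.02
```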

Security Best Practices

Orion is designed so you stay in control of what gets sent. Nothing leaves your machine without an explicit action—clicking Send, choosing a right-click option, or using a toolbar button. Before anything is transmitted, you decide what context to include.

API key safety matters because your keys unlock your provider accounts. Orion stores keys in your user profile (not in project files), so they never end up in version control. Still, treat them like passwords: don't share them, set expiry (30 days is a good default), and validate them periodically via Settings. If a key leaks, revoke it at your provider's console and generate a new one.

Knowing what gets sent helps you make informed choices. Graph sends include your Blueprint structure (nodes, connections, names)—useful for analysis but worth reviewing if your logic is proprietary. "Describe to Orion" on assets sends metadata only (column names, types, counts), not full content. The "Include Last Log Lines" setting adds Output Log output to queries; enable it when troubleshooting, but be aware that logs can contain paths, variable values, or other project details. Project name and engine version are always included so the AI can give relevant UE5-specific answers.

Provider policies vary. Most retain queries for a period (often 30 days) for abuse prevention and model improvement. Review xAI, OpenAI, Anthropic, and Google privacy policies to understand retention, logging, and how your data is handled.