Settings and Configuration

Opening Settings

Access Orion settings in three ways:

  1. Project Settings: Edit > Project Settings > Plugins > Azimuth > Azimuth Orion (most comprehensive)
  2. Chat window gear icon: Click the settings gear in the top-right of the Orion chat window
  3. Toolbar shortcut (if implemented): Right-click the Orion icon in the main editor toolbar

Screenshot Placeholder: [Project Settings window open with left sidebar showing Plugins > Azimuth expanded, "Azimuth Orion" selected and highlighted]

AI Configuration Category

This category controls which AI model you're using and basic request settings.

Screenshot Placeholder: [Settings panel showing AI Configuration category expanded with all settings visible]

Model Selection

Note: Not all models may be available for your API key. Access depends on the restrictions and usage tier associated with your key. If a model fails with a rate limit or access error, try another model or check your provider's dashboard.

Model ID dropdown:

  • Shows all models from enabled providers
  • Disabled providers' models are hidden
  • Models in the "Disabled Models" list are hidden

Screenshot Placeholder: [Model ID dropdown expanded showing available models: grok-code-fast-1, grok-4, gpt-5.2, gpt-5-mini, Claude Sonnet 4.6, etc. Tooltip visible on "gpt-5.2" showing: "Context: 400k tokens, Max Output: 128k, Vision: Yes, Cost: High"]

Request Settings

Max Output Tokens:

  • Caps the length of AI responses
  • Set to 0 for API default (recommended for most users)
  • Lower values (e.g., 2000) give shorter responses and cost less
  • Higher values (e.g., 16000) allow longer, more detailed responses

Request Timeout (Seconds):

  • How long to wait before canceling a request
  • Set to 0 for model-based default (60-90s depending on model)
  • Increase if you frequently get timeout errors on large Blueprint analysis
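
Both request settings share a "0 means use the default" convention. As an illustrative sketch (not the plugin's actual code):

```python
def resolve_setting(value: int, default: int) -> int:
    """Sketch of the '0 = use default' convention for request settings."""
    return default if value == 0 else value

# Timeout of 0 falls back to the model-based default (e.g., 90 s)
print(resolve_setting(0, 90))    # 90
# A non-zero value is used as-is
print(resolve_setting(120, 90))  # 120
```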

Temperature Control

Deep Scan Temperature: (0.0–1.0, default: 0.2)

  • Controls response randomness in Deep Scan mode
  • Lower values produce more precise, deterministic code suggestions
  • Use lower (0.0–0.3) for consistent, repeatable output; increase slightly if suggestions feel too rigid

Orbital Temperature: (0.0–2.0, default: 0.7)

  • Controls response randomness in Orbital (Ask) mode
  • Higher values produce more creative, varied responses
  • Use lower (0.0–0.5) for factual answers; use higher (0.8–2.0) for brainstorming or creative exploration
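
The two modes accept different valid ranges. A minimal sketch of clamping a value to each mode's documented range (the function name is an assumption for illustration, not Orion's API):

```python
def clamp_temperature(value: float, mode: str) -> float:
    """Clamp a temperature to the documented range for each Orion mode (sketch)."""
    ranges = {"deep_scan": (0.0, 1.0), "orbital": (0.0, 2.0)}
    lo, hi = ranges[mode]
    return max(lo, min(hi, value))

print(clamp_temperature(1.5, "deep_scan"))  # 1.0 (Deep Scan caps at 1.0)
print(clamp_temperature(1.5, "orbital"))    # 1.5 (Orbital allows up to 2.0)
```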

Other Settings

Persisted Response ID (Read-only):

  • Tracks the current xAI conversation chain (managed automatically)
  • You don't need to edit this
  • To clear: Use "Clear server history" button

Custom Endpoint (OpenAI-Compatible APIs)

Orion supports connecting to any OpenAI-compatible API endpoint. These settings are in the AI Configuration category, below the standard model settings. The URL, Model ID, and Skip Auth fields collapse (hidden entirely) until you enable the "Use Custom Endpoint" toggle.

Security requirements:

  • HTTPS only — HTTP URLs are rejected (SSRF prevention)
  • No localhost or private IPs — localhost, 127.0.0.1, and private IP ranges (10.x, 192.168.x, 172.16–31.x) are blocked
  • For local models (Ollama, LM Studio): use an HTTPS reverse proxy or tunnel (e.g., ngrok, cloudflared) to expose your local server over HTTPS
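
The rules above can be approximated in a few lines. A hedged sketch of the same checks using Python's standard library (this illustrates the policy; it is not the plugin's actual validation code):

```python
import ipaddress
from urllib.parse import urlparse

def is_allowed_endpoint(url: str) -> bool:
    """Approximate the SSRF rules above: HTTPS only, no localhost or private IPs."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False  # HTTP URLs are rejected
    host = parsed.hostname or ""
    if host == "localhost":
        return False
    try:
        addr = ipaddress.ip_address(host)
        # Covers 127.0.0.1, 10.x, 192.168.x, and 172.16-31.x
        return not (addr.is_loopback or addr.is_private)
    except ValueError:
        return True  # A DNS hostname, not a literal IP

print(is_allowed_endpoint("https://your-proxy.example.com/v1"))  # True
print(is_allowed_endpoint("http://example.com/v1"))              # False
print(is_allowed_endpoint("https://192.168.1.10/v1"))            # False
```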

How to set up:

  1. Go to Settings > AI Configuration
  2. Check Use Custom Endpoint — the sub-fields appear
  3. Enter the Custom Endpoint URL — must be HTTPS (e.g., https://your-proxy.example.com/v1)
  4. Enter the Custom Model ID — the model name your server uses:
    • Examples: llama3, mistral, codestral, deepseek-coder
    • This model appears in the Model dropdown with a "(Custom)" suffix
  5. (Optional) Check Skip Auth for Custom Endpoint if your server doesn't require an API key

Using your custom model:

  • Once configured, select your custom model from the Model dropdown at the top of AI Configuration (or in the EUW model selector)
  • You can freely switch between your custom model and standard cloud models
  • Standard cloud models continue to work normally alongside the custom endpoint

Important notes:

  • Your custom server must be OpenAI API-compatible (accepting /chat/completions requests)
  • Trailing slashes in the URL are stripped automatically
  • The default timeout for custom models is 120 seconds
  • Not all Orion features may be available with custom models. Because hundreds of custom models exist and vary widely, we cannot support them individually; use at your own risk.
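
Since the server must accept OpenAI-style /chat/completions requests, you can sanity-check compatibility by hand. A minimal sketch of building such a request, using the placeholder URL and model name from the setup steps above:

```python
import json

def build_chat_request(base_url: str, model_id: str, message: str):
    """Build an OpenAI-compatible chat request (sketch; trailing slashes stripped)."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model_id,
        "messages": [{"role": "user", "content": message}],
    }
    return url, json.dumps(payload)

url, body = build_chat_request("https://your-proxy.example.com/v1/", "llama3", "Hello")
print(url)  # https://your-proxy.example.com/v1/chat/completions
```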

Screenshot Placeholder: [AI Configuration settings showing "Custom Endpoint" section with "Use Custom Endpoint" checked, URL field showing "https://your-proxy.example.com/v1", Custom Model ID field showing "llama3", and "Skip Auth" checked]

AI Providers Category

Control which AI providers are available and manage API keys. Each provider is separated by a thin horizontal line, and sub-rows (API Key, Expiration, Models) collapse when the provider is disabled — they disappear entirely rather than being dimmed.

Screenshot Placeholder: [Settings showing AI Providers category with all 4 provider subsections visible: xAI, OpenAI, Anthropic, Google, separated by horizontal dividers]

Enable/Disable Providers

Each provider has a bold header with an enable/disable checkbox:

  • Enable xAI (Grok) — OFF by default
  • Enable OpenAI — OFF by default
  • Enable Anthropic (Claude) — OFF by default
  • Enable Google (Gemini) — OFF by default

All providers are disabled by default. Enable the ones you have API keys for.

Why disable providers?

  • You only have an API key for one provider
  • You want to limit model choices to reduce decision fatigue
  • You're enforcing cost controls (disable expensive providers)

When you disable a provider, all its models disappear from the Model ID dropdown, and the provider's Key / Expiration / Models rows collapse.

Per-Provider Settings

Each provider (xAI, OpenAI, Anthropic, Google) has its own subsection (visible only when the provider is enabled):

API Key Management:

  • Set API Key button: Opens secure input dialog (or inline text field)
  • Validate button: Tests the key with a quick API call
    • Green checkmark = valid key
    • Red X = invalid or expired key
  • Delete button: Removes the key from encrypted storage (secure wipe)

Key Expiry:

  • [Provider] Key Expiry (Days): Range -1 to 90 (default: 30; -1 = unlimited)
  • Timer starts from last validation
  • After expiry, Orion warns you to re-validate
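
The expiry rule can be sketched as follows (illustrative only; the function name is hypothetical, not Orion's API):

```python
from datetime import datetime, timedelta

def key_is_expired(last_validated: datetime, expiry_days: int, now: datetime) -> bool:
    """Sketch of the expiry rule: -1 = unlimited, otherwise N days from last validation."""
    if expiry_days == -1:
        return False
    return now > last_validated + timedelta(days=expiry_days)

validated = datetime(2025, 1, 1)
print(key_is_expired(validated, 30, datetime(2025, 2, 5)))   # True (35 days later)
print(key_is_expired(validated, -1, datetime(2030, 1, 1)))   # False (unlimited)
```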

Screenshot Placeholder: [xAI subsection showing API key field, Validate button with green checkmark icon, Delete button, and "xAI Key Expiry (Days)" slider set to 30]

Disabled Models

Disabled Models array:

  • List of model IDs to hide from dropdown (even if their provider is enabled)
  • Example: "o4-mini", "claude-opus-4-6"
  • Use case: "I have OpenAI enabled but I don't want expensive o4-mini available—only show gpt-5.2 and gpt-5-mini"

How to use:

  1. Click the "+" button to add an entry
  2. Type the model ID exactly (e.g., "o4-mini", "claude-opus-4-6")
  3. That model disappears from the Model ID dropdown
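
The combined filtering (provider enabled, model not explicitly disabled) can be sketched as:

```python
def visible_models(all_models, enabled_providers, disabled_models):
    """Sketch of dropdown filtering: provider enabled AND model not in Disabled Models."""
    return [
        model for model, provider in all_models
        if provider in enabled_providers and model not in disabled_models
    ]

models = [("gpt-5-mini", "OpenAI"), ("o4-mini", "OpenAI"), ("grok-4", "xAI")]
print(visible_models(models, {"OpenAI"}, {"o4-mini"}))  # ['gpt-5-mini']
```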

Screenshot Placeholder: [Disabled Models array showing entries: "o4-mini", "claude-opus-4-6" with + and - buttons visible]

Interface Category

Control how Orion displays responses and what context it includes with queries.

The Interface category is organized into sub-sections with bold headers. Properties are indented under their sub-section, and collapsed child items are further indented for visual clarity. Dependent sub-settings collapse (disappear entirely) when their parent toggle is off — they are not dimmed.

Screenshot Placeholder: [Settings showing Interface category expanded]

Orion Chat

The Orion Chat sub-section appears first and contains the most commonly adjusted settings for the chat window.

Response Display Mode:

  • Rich Text (default): Formatted responses with bold, headers, code block styling
  • Plain Text: Unformatted text (easier to copy, no styling)

When to use Plain Text:

  • You're copying large code blocks frequently (no formatting artifacts)
  • You prefer minimal styling
  • Rich text rendering has issues (rare)

Add Spacing Between Response Lines:

  • Adds extra padding between lines in AI responses
  • Makes long responses easier to read
  • Recommended: ON (default)

Screenshot Placeholders:

  • [Side-by-side comparison: Left side shows Rich Text response with bold headers, inline monospace in gray boxes, and code blocks. Right side shows same response in Plain Text with no formatting]

Include Last Log Lines in Query:

  • Prepends recent Output Log lines to each query
  • Essential for error troubleshooting
  • Tooltip: "Useful for situationally debugging Output Log Errors. Recommend to use only when needed."
  • When enabled, the following sub-settings appear (indented):
    • Last Log Lines Count: 1-200 lines (default: 20) — hidden when parent toggle is off
    • Send Full Log: Sends entire captured buffer instead of last N lines — hidden when parent toggle is off
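
The last-N-lines selection can be sketched as (illustrative only, with a made-up buffer):

```python
def log_context(log_lines: list[str], count: int = 20, send_full: bool = False) -> str:
    """Sketch: prepend either the full captured buffer or just the last N lines."""
    selected = log_lines if send_full else log_lines[-count:]
    return "\n".join(selected)

buffer = ["LogInit: startup", "Warning: deprecated node", "Error: missing variable"]
print(log_context(buffer, count=2))  # last two lines only
```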

When to enable log lines:

  • You have a compilation error and want AI to diagnose it
  • You have a runtime error/crash and need help
  • You're seeing warnings and want explanations

Example with log lines enabled:

Your Output Log shows:

Error: Blueprint BP_PlayerController (function UpdateHUD): Variable 'HealthWidget' used but not defined
Warning: Accessed None trying to read property 'HealthWidget'

You ask: "What's wrong?"

AI receives the log lines + your question, responds: "Based on your Output Log, you're trying to access a variable 'HealthWidget' that doesn't exist in the Blueprint. You need to add a variable named 'HealthWidget' (type: User Widget object reference) to BP_PlayerController."

Screenshot Placeholder: [Settings showing Include Last Log Lines in Query checkbox checked, Last Log Lines Count and Send Full Log visible below it (indented)]

Enable Custom QuickPrompts:

  • Load additional QuickPrompt libraries from a custom folder
  • When enabled, the following sub-setting appears (indented):
    • Custom QuickPrompts Path — Folder picker (text box + "..." browse button) for selecting a folder containing UOrionQuickPromptLibrary assets. The browse button opens an OS directory dialog defaulting to your project's Content folder. — hidden when toggle is off

Show Usage Tabulation (Experimental):

  • Displays token usage in chat window (estimated + actual from API)
  • Shows: Last request tokens, session total

Use Prompt Repetition (Experimental):

  • When enabled, Orion automatically includes a second copy of your prompt in each request, which has been shown to improve response quality for non-reasoning tasks.
  • Orbital (Ask) mode only — does not apply to Deep Scan mode.
  • Roughly doubles the input tokens per query (increases API cost). Output tokens and response time are unaffected.
  • Default: OFF.

When enabled, the following sub-settings appear (indented):

  • Prompt Repetition Token Threshold — If your message exceeds this estimated token count (default: 8000), repetition is automatically skipped to avoid context overflow. Range: 500–32000. Hidden when toggle is off.
  • Description text — An explanatory note is displayed: "Best used with Non-Reasoning models, this enables each prompt token to attend to every other prompt token. When not using reasoning, prompt repetition improves the performance of LLMs without increasing the lengths of the generated outputs or latency." (Google Research, arXiv 2512.14982)
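
The skip-when-too-long behavior can be sketched as follows. The chars/4 token estimate here is a rough assumption for illustration, not Orion's actual tokenizer:

```python
def build_outgoing_prompt(prompt: str, enabled: bool, threshold: int = 8000) -> str:
    """Sketch: repeat the prompt unless its estimated token count exceeds the threshold."""
    estimated_tokens = len(prompt) // 4  # rough chars/4 heuristic (assumption)
    if enabled and estimated_tokens <= threshold:
        return prompt + "\n\n" + prompt  # roughly doubles input tokens
    return prompt

short = "Explain this Blueprint error."
print(build_outgoing_prompt(short, True).count(short))  # 2
```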

Visual indicator: When Prompt Repetition is enabled, a visual indicator is displayed in the chat window to remind you the feature is active. The indicator's visibility is driven by Is Prompt Repetition Enabled on the chat widget.

Note: Project name and engine version are always included in every query's system message automatically. There is no toggle for this; it is always active.

Integration

These toggles live under the Integration bold header (indented in the UI):

Enable Right-Click Menu:

  • Shows/hides "Azimuth Orion" submenu in Content Browser
  • Default: ON
  • Turn OFF if you rarely use right-click analysis (declutter context menu)

Graph Analysis Uses Deep Scan Mode:

  • When analyzing from Blueprint toolbar, use Deep Scan mode and show toast
  • Default: ON

Notify on Packaging Failure:

  • When project packaging fails, a notification appears with the error count and a clickable "Open in Orion" link
  • Clicking the link opens the Orion chat and automatically submits the packaging errors for AI analysis
  • Default: ON
  • Turn OFF if you prefer not to receive packaging-failure notifications

Mode Context

Deep Scan Mode Context and Orbital (Ask) Mode Context — custom system message prompts for each mode (multi-line text fields).

Logging

Log Format, Log Directory, and Auto-export Chat Log — chat export settings.

Blueprint Context Category

Controls how Blueprint analysis works.

Screenshot Placeholder: [Settings showing Blueprint Context category]

Compile On Send:

  • Enabled by default
  • Compiles Blueprint before sending to AI
  • Includes compiler results (errors/warnings) and defined variables in the context
  • Why it's useful: AI can diagnose missing variables, compilation errors
  • Disable if: You don't want automatic compilation (prefer manual control)

Context Send Display Message:

  • Template for what you see in chat when sending Blueprint context
  • Default: "Blueprint {BlueprintName} is being analyzed..."
  • Placeholders: {BlueprintName}, {GraphCount}, {NodeCount}, {PayloadLabel}
  • Customize if you want different wording
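
Placeholder substitution in these templates can be sketched as (hypothetical helper, not Orion's code):

```python
def fill_template(template: str, **values) -> str:
    """Replace Orion-style {Placeholder} tokens in a context-send template (sketch)."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", str(value))
    return template

print(fill_template(
    "Blueprint {BlueprintName} is being analyzed...",
    BlueprintName="BP_PlayerController",
))  # Blueprint BP_PlayerController is being analyzed...
```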

Context Send API Prompt:

  • Template for the actual prompt sent to the AI
  • Default: "Blueprint context for {BlueprintName}. Please acknowledge that you received it and ask what I would like to know about it."
  • Advanced users can customize to change AI behavior

EDA Integration (Experimental) Category

Export Blueprint context to Epic Developer Assistant (EDA).

Note: This feature is experimental. The EDA API is not yet available, so this integration uses clipboard copy and local JSON export to facilitate the transfer of data between Orion and external AI tools.

Enable EDA Integration:

  • Default: ON
  • Shows "EDA" destination option in Blueprint toolbar Send dropdown
  • Allows exporting Blueprint context as JSON
  • When enabled, the following sub-settings appear (indented):
    • EDA Context Export Path — Custom path for EDA JSON exports (empty = default)
    • Copy to Clipboard on EDA Export — Also copies JSON to clipboard when exporting

EDA Context Export Path:

  • Where JSON files are saved
  • Default: Saved/AzimuthOrion/ContextExports/
  • Customize if you want a different location

Copy to Clipboard on EDA Export:

  • Default: ON
  • When exporting to EDA, also copies JSON to clipboard
  • Useful: Paste directly into EDA without opening the file

Screenshot Placeholder: [Settings showing EDA Integration category with Enable EDA Integration checked, export path field showing default, and clipboard copy checkbox checked]

Project Indexing Category

Note: Full project indexing is a planned feature. Currently available:

Use Cached Results When Analyzing Blueprints:

  • When enabled, uses cached Blueprint summaries if available
  • Faster analysis (no re-parsing if Blueprint hasn't changed)
  • Currently experimental