Settings and Configuration

Opening Settings

Access Orion settings in three ways:

  1. Project Settings: Edit > Project Settings > Plugins > Azimuth > Azimuth Orion (most comprehensive)
  2. Chat window gear icon: Click the settings gear in the top-right of the Orion chat window
  3. Toolbar shortcut: Right-click the Orion icon in the main editor toolbar (if implemented in your build)

Screenshot Placeholder: [Project Settings window open with left sidebar showing Plugins > Azimuth expanded, "Azimuth Orion" selected and highlighted]

AI Configuration Category

This category controls which AI model you're using and basic request settings.

Screenshot Placeholder: [Settings panel showing AI Configuration category expanded with all settings visible]

Model Selection

Model ID dropdown:

  • Shows all models from enabled providers
  • Disabled providers' models are hidden
  • Models in the "Disabled Models" list are hidden

Screenshot Placeholder: [Model ID dropdown expanded showing available models: grok-code-fast-1, grok-3, grok-4, GPT-4o, GPT-4o mini, Claude Sonnet 4.5, etc. Tooltip visible on "GPT-4o" showing: "Context: 128k tokens, Max Output: 16k, Vision: Yes, Cost: High"]

Request Settings

Max Output Tokens:

  • Caps the length of AI responses
  • Set to 0 for API default (recommended for most users)
  • Lower values (e.g., 2000) give shorter responses and cost less
  • Higher values (e.g., 16000) allow longer, more detailed responses

Request Timeout (Seconds):

  • How long to wait before canceling a request
  • Set to 0 for model-based default (60-90s depending on model)
  • Increase if you frequently get timeout errors on large Blueprint analysis
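
Orion makes these API calls for you; the sketch below exists only to build intuition for how the two settings map onto an OpenAI-style chat completion request. The endpoint, key, and model are placeholders, not Orion internals:

    import requests

    # Sketch only: roughly how Max Output Tokens and Request Timeout map onto
    # an OpenAI-style request. Endpoint, key, and model are placeholders.
    API_URL = "https://api.openai.com/v1/chat/completions"
    API_KEY = "sk-..."  # your provider key

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "Explain Blueprint interfaces."}],
            # Max Output Tokens: caps response length; a setting of 0 corresponds
            # to omitting this field and taking the API default
            "max_tokens": 2000,
        },
        # Request Timeout (Seconds): how long to wait before canceling
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])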

Other Settings

Persisted Response ID (Read-only):

  • Tracks the current xAI conversation chain (managed automatically)
  • You don't need to edit this
  • To clear it, use the "Clear server history" button

Custom Endpoint (Ollama / LM Studio / Local Models)

Orion supports connecting to any OpenAI-compatible API endpoint. These settings are in the AI Configuration category, below the standard model settings. The URL, Model ID, and Skip Auth fields stay collapsed (hidden entirely) until you enable the "Use Custom Endpoint" toggle.

How to set up:

  1. Go to Settings > AI Configuration
  2. Check Use Custom Endpoint — the sub-fields appear
  3. Enter the Custom Endpoint URL — the base URL of your local server:
    • Ollama: http://localhost:11434/v1
    • LM Studio: http://localhost:1234/v1
    • Other OpenAI-compatible: http://your-server:port/v1
  4. Enter the Custom Model ID — the model name your server uses:
    • Examples: llama3, mistral, codestral, deepseek-coder
    • This model appears in the Model dropdown with a "(Custom)" suffix
  5. (Optional) Check Skip Auth for Custom Endpoint if your local server doesn't require an API key

Using your custom model:

  • Once configured, select your custom model from the Model dropdown at the top of AI Configuration (or in the EUW model selector)
  • You can freely switch between your custom model and standard cloud models
  • Standard cloud models continue to work normally alongside the custom endpoint

Important notes:

  • Your custom server must be OpenAI API-compatible (accepting /chat/completions requests)
  • Trailing slashes in the URL are stripped automatically
  • Local models may be slower; the default timeout for custom models is 120 seconds
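
Before pointing Orion at a local server, you can sanity-check that it accepts OpenAI-style /chat/completions requests. A minimal sketch, assuming Ollama on its default port with a pulled llama3 model (adjust the URL and model name for your server):

    import requests

    # Sanity check: does the local server speak the OpenAI chat API?
    # Assumes Ollama's default port and a pulled "llama3" model.
    BASE_URL = "http://localhost:11434/v1"  # LM Studio: http://localhost:1234/v1

    response = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": "llama3",
            "messages": [{"role": "user", "content": "Say hello."}],
        },
        timeout=120,  # local models can be slow; Orion's custom-model default is 120s
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

If this prints a reply, Orion's Custom Endpoint settings with the same URL and model ID should work.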

Screenshot Placeholder: [AI Configuration settings showing "Custom Endpoint" section with "Use Custom Endpoint" checked, URL field showing "http://localhost:11434/v1", Custom Model ID field showing "llama3", and "Skip Auth" checked]

AI Providers Category

Control which AI providers are available and manage API keys. Each provider is separated by a thin horizontal line, and sub-rows (API Key, Expiration, Models) collapse when the provider is disabled — they disappear entirely rather than being dimmed.

Screenshot Placeholder: [Settings showing AI Providers category with all 4 provider subsections visible: xAI, OpenAI, Anthropic, Google, separated by horizontal dividers]

Enable/Disable Providers

Each provider has a bold header with an enable/disable checkbox:

  • Enable xAI (Grok) — OFF by default
  • Enable OpenAI — OFF by default
  • Enable Anthropic (Claude) — OFF by default
  • Enable Google (Gemini) — OFF by default

All providers are disabled by default. Enable the ones you have API keys for.

Why disable providers?

  • You only have an API key for one provider
  • You want to limit model choices to reduce decision fatigue
  • You're enforcing cost controls (disable expensive providers)

When you disable a provider, all its models disappear from the Model ID dropdown, and the provider's Key / Expiration / Models rows collapse.

Per-Provider Settings

Each provider (xAI, OpenAI, Anthropic, Google) has its own subsection (visible only when the provider is enabled):

API Key Management:

  • Set API Key button: Opens secure input dialog (or inline text field)
  • Validate button: Tests the key with a quick API call
    • Green checkmark = valid key
    • Red X = invalid or expired key
  • Delete button: Removes the key from encrypted storage (secure wipe)
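
Orion's exact validation request isn't documented here, but a common lightweight check against an OpenAI-style API is a models-list call, which succeeds only with a working key. An illustrative sketch (OpenAI's endpoint shown; other providers use their own):

    import requests

    def key_is_valid(api_key: str) -> bool:
        """Cheap key check: list models; HTTP 200 means the key is accepted.
        Illustrative only -- Orion's actual validation call may differ."""
        response = requests.get(
            "https://api.openai.com/v1/models",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        return response.status_code == 200

    print(key_is_valid("sk-..."))  # True -> green checkmark; False -> red X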

Key Expiry:

  • [Provider] Key Expiry (Days): 3-90 days, or -1 for unlimited
  • Default: 30 days
  • Timer starts from last validation
  • After expiry, Orion warns you to re-validate

Screenshot Placeholder: [xAI subsection showing API key field, Validate button with green checkmark icon, Delete button, and "xAI Key Expiry (Days)" slider set to 30]

Disabled Models

Disabled Models array:

  • List of model IDs to hide from dropdown (even if their provider is enabled)
  • Use case: "I have OpenAI enabled but I don't want expensive o1 models available—only show GPT-4o and GPT-4o mini"

How to use:

  1. Click the "+" button to add an entry
  2. Type the model ID exactly (e.g., "o1", "claude-opus-4")
  3. That model disappears from the Model ID dropdown

Screenshot Placeholder: [Disabled Models array showing entries: "o1", "claude-opus-4" with + and - buttons visible]
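
The resulting dropdown is easy to reason about: a model appears only if its provider is enabled and its ID is not in the Disabled Models array. An illustrative sketch (the names and data layout are hypothetical, not Orion's internals):

    # Hypothetical illustration of the dropdown filtering rule.
    enabled_providers = {"OpenAI"}
    disabled_models = {"o1", "claude-opus-4"}

    all_models = [
        ("OpenAI", "gpt-4o"),
        ("OpenAI", "gpt-4o-mini"),
        ("OpenAI", "o1"),
        ("Anthropic", "claude-opus-4"),
    ]

    dropdown = [
        model for provider, model in all_models
        if provider in enabled_providers and model not in disabled_models
    ]
    print(dropdown)  # ['gpt-4o', 'gpt-4o-mini']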

Interface Category

Control how Orion displays responses and what context it includes with queries.

Screenshot Placeholder: [Settings showing Interface category expanded]

Display Settings

Response Display Mode:

  • Rich Text (default): Formatted responses with bold, headers, code block styling
  • Plain Text: Unformatted text (easier to copy, no styling)

When to use Plain Text:

  • You're copying large code blocks frequently (no formatting artifacts)
  • You prefer minimal styling
  • Rich text rendering has issues (rare)

Add Spacing Between Response Lines:

  • Adds extra padding between lines in AI responses
  • Makes long responses easier to read
  • Recommended: ON (default)

Screenshot Placeholders:

  • [Side-by-side comparison: Left side shows Rich Text response with bold headers, inline code in gray boxes, and formatted code blocks. Right side shows same response in Plain Text with no formatting]

Integration

These toggles live under the Integration bold header (indented in the UI):

Enable Right-Click Menu:

  • Shows/hides "Azimuth Orion" submenu in Content Browser
  • Default: ON
  • Turn OFF if you rarely use right-click analysis (declutter context menu)

Graph Analysis Uses Deep Scan Mode:

  • When you trigger analysis from the Blueprint toolbar, Orion uses Deep Scan mode and shows a toast notification
  • Default: ON

Show Usage Tabulation (Experimental):

  • Displays token usage in the chat window (estimated and actual values from the API)
  • Shows tokens for the last request and a running session total

Query Context

Context settings live under the Query Context bold header. Dependent sub-settings collapse (disappear entirely) when their parent toggle is off — they are not dimmed.

Note: Project name and engine version are always included in every query's system message automatically. There is no toggle for this; it is always active.

Include Last Log Lines in Query:

  • Prepends recent Output Log lines to each query
  • Essential for error troubleshooting
  • When enabled, the following sub-settings appear:
    • Last Log Lines Count: 1-200 lines (default: 20)
    • Send Full Log: Sends the entire captured buffer instead of the last N lines
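
The semantics are simple last-N slicing. An illustrative sketch of what gets prepended (not Orion's actual code; the log content is made up):

    # Illustrative: what the setting prepends to your query.
    captured_log = "\n".join(f"LogTemp: message {i}" for i in range(1, 101))  # fake log
    log_lines = captured_log.splitlines()

    n = 20  # Last Log Lines Count (1-200, default: 20)
    context = "\n".join(log_lines[-n:])  # default: only the last N lines
    # With Send Full Log enabled, the whole buffer is sent instead:
    # context = captured_log
    print(context.splitlines()[0])  # LogTemp: message 81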

When to enable log lines:

  • You have a compilation error and want AI to diagnose it
  • You have a runtime error/crash and need help
  • You're seeing warnings and want explanations

Example with log lines enabled:

Your Output Log shows:

Error: Blueprint BP_PlayerController (function UpdateHUD): Variable 'HealthWidget' used but not defined
Warning: Accessed None trying to read property 'HealthWidget'

You ask: "What's wrong?"

AI receives the log lines + your question, responds: "Based on your Output Log, you're trying to access a variable 'HealthWidget' that doesn't exist in the Blueprint. You need to add a variable named 'HealthWidget' (type: User Widget object reference) to BP_PlayerController."

Screenshot Placeholder: [Settings showing Include Last Log Lines in Query checkbox checked, Last Log Lines Count and Send Full Log visible below it]

Mode Context

Deep Scan Mode Context and Orbital (Ask) Mode Context — custom system message prompts for each mode (multi-line text fields).

Logging

Log Format, Log Directory, and Auto-export Chat Log — chat export settings.

QuickPrompts

QuickPrompts settings are now under the Interface category (previously a separate category).

Enable Custom QuickPrompts:

  • Load additional QuickPrompt libraries from a custom folder
  • When enabled, the following sub-setting appears:
    • Custom QuickPrompts Path — Folder containing UOrionQuickPromptLibrary assets

Blueprint Context Category

Controls how Blueprint analysis works.

Screenshot Placeholder: [Settings showing Blueprint Context category]

Compile On Send:

  • Enabled by default
  • Compiles Blueprint before sending to AI
  • Includes compiler results (errors/warnings) and defined variables in the context
  • Why it's useful: AI can diagnose missing variables, compilation errors
  • Disable if: You don't want automatic compilation (prefer manual control)

Context Send Display Message:

  • Template for what you see in chat when sending Blueprint context
  • Default: "Blueprint {BlueprintName} is being analyzed..."
  • Placeholders: {BlueprintName}, {GraphCount}, {NodeCount}, {PayloadLabel}
  • Customize if you want different wording

Context Send API Prompt:

  • Template for the actual prompt sent to the AI
  • Default: "Blueprint context for {BlueprintName}. Please acknowledge that you received it and ask what I would like to know about it."
  • Advanced users can customize to change AI behavior
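
Both templates use {Placeholder} substitution. Orion fills in the values internally; Python's str.format mirrors the syntax, so a customized display message would render like this (the values are made up for illustration):

    # Illustrative: {Placeholder} substitution in a customized display message.
    template = "Analyzing {BlueprintName}: {GraphCount} graphs, {NodeCount} nodes ({PayloadLabel})"

    print(template.format(
        BlueprintName="BP_PlayerController",  # made-up example values
        GraphCount=3,
        NodeCount=142,
        PayloadLabel="Deep Scan",
    ))
    # Analyzing BP_PlayerController: 3 graphs, 142 nodes (Deep Scan)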

EDA Integration Category

Export Blueprint context to Epic Developer Assistant (EDA).

Enable EDA Integration:

  • Default: ON
  • Shows "EDA" destination option in Blueprint toolbar Send dropdown
  • Allows exporting Blueprint context as JSON

EDA Context Export Path:

  • Where JSON files are saved
  • Default: Saved/AzimuthOrion/ContextExports/
  • Customize if you want a different location

Copy to Clipboard on EDA Export:

  • Default: ON
  • When exporting to EDA, also copies JSON to clipboard
  • Useful: Paste directly into EDA without opening the file
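
The export schema isn't documented here, but exports are plain JSON files in the folder above, so you can inspect the newest one yourself. A sketch, run from the project root with the default export path (assumes at least one export exists):

    import json
    from pathlib import Path

    # Inspect the newest EDA export (default path from the setting above).
    export_dir = Path("Saved/AzimuthOrion/ContextExports")
    newest = max(export_dir.glob("*.json"), key=lambda p: p.stat().st_mtime)

    data = json.loads(newest.read_text(encoding="utf-8"))
    print(newest.name)
    print(list(data))  # top-level keys; the schema itself isn't documented here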

Screenshot Placeholder: [Settings showing EDA Integration category with Enable EDA Integration checked, export path field showing default, and clipboard copy checkbox checked]

Project Indexing Category

Note: Full project indexing is a planned feature. The following setting is available today:

Use Cached Results When Analyzing Blueprints:

  • When enabled, uses cached Blueprint summaries if available
  • Faster analysis (no re-parsing if Blueprint hasn't changed)
  • Currently experimental