Using Vision (Screenshots)
Vision-Capable Models
Not all AI models can "see" images. Vision-capable models can analyze screenshots you send.
Models that support vision:
- xAI: grok-vision
- OpenAI: GPT-4o, GPT-4 Turbo, GPT-4V (older)
- Anthropic: All Claude 3 and later models (Opus 4, Sonnet 4.5, Sonnet 3.5, Haiku 4)
- Google: Gemini Pro Vision, Gemini 1.5 Pro, Gemini 1.5 Flash
Models that DO NOT support vision:
- grok-code-fast-1, grok-3, grok-4, grok-4-turbo
- GPT-3.5 Turbo, o1, o1-mini, o3-mini
- (See Appendix A for the full list)
To use vision features, first select a vision-capable model in Settings > AI Configuration > Model ID.
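Conceptually, vision support is just a property of the Model ID string you select, so the check boils down to a lookup. The sketch below is an illustration only, assuming hypothetical names and ID strings; it is not Orion's actual source.

```cpp
#include "CoreMinimal.h"

// Hypothetical lookup table -- the entries and the function name are
// assumptions for illustration, not Orion's real implementation.
static const TSet<FString> GVisionCapableModels = {
    TEXT("grok-vision"),
    TEXT("gpt-4o"), TEXT("gpt-4-turbo"),
    TEXT("claude-opus-4"), TEXT("claude-sonnet-4-5"), TEXT("claude-3-5-sonnet"),
    TEXT("gemini-pro-vision"), TEXT("gemini-1.5-pro"), TEXT("gemini-1.5-flash")
};

bool IsVisionCapableModel(const FString& ModelId)
{
    // Compare against the table using a lower-cased copy of the ID.
    return GVisionCapableModels.Contains(ModelId.ToLower());
}
```

If this check fails at send time, Orion falls back to the warning described in "Vision Model Check" below.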
Capturing Blueprint Editor Window
When you have a visual question about a Blueprint graph, send a screenshot.
Step-by-step:
- Select a vision-capable model in settings (e.g., GPT-4o, Grok Vision)
- Open a Blueprint in the editor
- Look for the Orion toolbar (top-right area)
- Click the Screenshot button (leftmost button, up arrow icon)
- Toast notification appears: "Screenshot captured successfully. It will be included with your next Orion query." (shows for 6 seconds)
- Ask your question in the Orion chat window: "What's wrong with this node setup?" or "Is this event flow correct?"
- Click Send
- The AI receives both your text and the screenshot in a single request (see the sketch after the screenshots below)
- The AI responds with visual analysis, referencing specific nodes or connections it sees
Screenshot Placeholders:
- [Blueprint editor window with Orion toolbar visible, Screenshot button (leftmost, up arrow icon) highlighted with red circle]
- [Toast notification overlay in bottom-right corner of screen showing "Screenshot captured successfully. It will be included with your next Orion query." with green checkmark icon]
- [Orion chat window showing a user query "Is this event flow correct?" followed by an AI response that references specific nodes visible in a screenshot, e.g., "Looking at your screenshot, the Branch node's True pin should connect to SpawnActor, not the False pin..."]
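For the curious, "the AI receives both your text and the screenshot" means the captured image is encoded (typically base64) and attached to the same chat message as your prompt. Below is a rough sketch of what that payload could look like using Unreal's JSON utilities and an OpenAI-style image_url field; the function name is made up, other providers use different field layouts, and Orion's real request code may differ.

```cpp
#include "CoreMinimal.h"
#include "Misc/Base64.h"
#include "Dom/JsonObject.h"
#include "Dom/JsonValue.h"
#include "Serialization/JsonWriter.h"
#include "Serialization/JsonSerializer.h"

// Illustrative sketch: pack the user's prompt plus a captured PNG into one
// OpenAI-style multimodal chat message. Name and structure are assumptions.
FString BuildVisionMessageJson(const FString& UserPrompt, const TArray<uint8>& ScreenshotPng)
{
    // Text part: {"type":"text","text":"..."}
    TSharedPtr<FJsonObject> TextPart = MakeShared<FJsonObject>();
    TextPart->SetStringField(TEXT("type"), TEXT("text"));
    TextPart->SetStringField(TEXT("text"), UserPrompt);

    // Image part: the PNG bytes become a base64 data URL.
    TSharedPtr<FJsonObject> ImageUrl = MakeShared<FJsonObject>();
    ImageUrl->SetStringField(TEXT("url"),
        FString::Printf(TEXT("data:image/png;base64,%s"), *FBase64::Encode(ScreenshotPng)));

    TSharedPtr<FJsonObject> ImagePart = MakeShared<FJsonObject>();
    ImagePart->SetStringField(TEXT("type"), TEXT("image_url"));
    ImagePart->SetObjectField(TEXT("image_url"), ImageUrl);

    // One user message whose "content" is an array holding both parts.
    TArray<TSharedPtr<FJsonValue>> ContentParts;
    ContentParts.Add(MakeShareable(new FJsonValueObject(TextPart)));
    ContentParts.Add(MakeShareable(new FJsonValueObject(ImagePart)));

    TSharedPtr<FJsonObject> Message = MakeShared<FJsonObject>();
    Message->SetStringField(TEXT("role"), TEXT("user"));
    Message->SetArrayField(TEXT("content"), ContentParts);

    // Serialize the message object to a JSON string.
    FString OutJson;
    TSharedRef<TJsonWriter<>> Writer = TJsonWriterFactory<>::Create(&OutJson);
    FJsonSerializer::Serialize(Message.ToSharedRef(), Writer);
    return OutJson;
}
```

Anthropic and Google expect differently named image fields, so a real implementation branches per provider; the point is simply that the screenshot travels as encoded bytes inside the same request as your question, which is also why the toast says the capture "will be included with your next Orion query". (The sketch relies on the engine's Json module; add "Json" to your module's dependencies if you adapt it.)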
Vision Model Check
What if your current model doesn't support vision?
If you capture a screenshot but you're using a non-vision model (e.g., grok-code-fast-1):
- Orion detects the mismatch
- Shows a friendly message in chat:
"The currently selected model (grok-code-fast-1) does not support images. Please select a vision-capable model such as:
- Grok Vision
- GPT-4o
- Claude 3.5 Sonnet
- Gemini Pro Vision
Then try capturing the screenshot again."
- The screenshot is automatically cleared (not sent to the API)
- Switch to a vision-capable model in settings, then capture the screenshot again (the check is sketched below)
Screenshot Placeholder: [Chat window showing the vision compatibility warning message with the list of recommended models]
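Put together, the behavior above amounts to a guard that runs just before the request is sent. The following sketch reuses IsVisionCapableModel and BuildVisionMessageJson from the earlier sketches and assumes hypothetical helpers (PostAssistantNotice, BuildTextMessageJson); it is illustrative, not Orion's actual code.

```cpp
#include "CoreMinimal.h"

// Hypothetical helpers, declared only so the sketch is self-contained.
void PostAssistantNotice(const FString& Message);                   // shows a message in the chat panel
FString BuildTextMessageJson(const FString& UserText);              // text-only payload
FString BuildVisionMessageJson(const FString& UserText,
                               const TArray<uint8>& ScreenshotPng); // from the earlier sketch
bool IsVisionCapableModel(const FString& ModelId);                  // from the earlier sketch

void SendQuery(const FString& ModelId, const FString& UserText, TArray<uint8>& PendingScreenshotPng)
{
    if (PendingScreenshotPng.Num() > 0 && !IsVisionCapableModel(ModelId))
    {
        // Post the friendly warning in chat instead of calling the API with an image.
        PostAssistantNotice(FString::Printf(
            TEXT("The currently selected model (%s) does not support images. ")
            TEXT("Please select a vision-capable model, then try capturing the screenshot again."),
            *ModelId));

        // The pending screenshot is cleared and never reaches the API.
        PendingScreenshotPng.Empty();
        return;
    }

    // Vision-capable model (or no screenshot pending): build and dispatch the request.
    const FString Body = (PendingScreenshotPng.Num() > 0)
        ? BuildVisionMessageJson(UserText, PendingScreenshotPng)
        : BuildTextMessageJson(UserText);
    // ... send Body to the provider's HTTP endpoint ...
}
```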
Sample Workflow: Debug Visual Blueprint Layout
Scenario: Your Blueprint has a complex branching flow. It's hard to describe in text which pin connects where.
- Switch to a vision-capable model: Settings > AI Configuration > Model ID > select "GPT-4o"
- Open your Blueprint (BP_QuestSystem) and arrange the graph so the problematic area is visible (zoom/pan as needed)
- Click the Screenshot button in the Orion toolbar
- Toast confirms: "Screenshot captured successfully..."
- Go to Orion chat and type: "Is this event flow correct? Should the branch connect here or should it go to the other node?"
- Click Send
- AI examines your screenshot and responds:
"Looking at your screenshot, I can see the CheckQuestStatus event with a Branch node. Currently:
- Branch's True pin → CompleteQuest function
- Branch's False pin → (disconnected)
The logic appears inverted. If the quest status check returns False (quest not complete), you probably want to take action. Connect the False pin to UpdateQuestProgress or similar. The True pin (quest already complete) should probably do nothing or show a message. As currently wired, completing the quest only happens when the check is True, which may not be your intent."
- You realize: The True/False pins are swapped!
- Fix the connections, test again, and the flow now works as intended (see the code sketch after the screenshot below)
Screenshot Placeholder: [Blueprint graph screenshot showing a CheckQuestStatus event node connected to a Branch node, with True/False pins connected to CompleteQuest and UpdateQuestProgress nodes, with annotations showing the AI's analysis of the flow]
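If it helps to see the corrected flow as code rather than as a graph, the if/else below is a plain C++ rendering of the wiring the AI suggested. The function and helper names are placeholders chosen to match the example; this is not code that ships with Orion or with your project.

```cpp
// Text rendering of the corrected Blueprint flow (names are placeholders).
void UpdateQuestProgress();              // the node the AI suggested for the False path
void ShowQuestAlreadyCompleteMessage();  // hypothetical stand-in for the True path

void HandleCheckQuestStatus(bool bQuestComplete)
{
    if (bQuestComplete)
    {
        // Branch True pin: the quest is already complete, so nothing needs to
        // happen beyond (optionally) telling the player.
        ShowQuestAlreadyCompleteMessage();
    }
    else
    {
        // Branch False pin: the quest is still in progress -- this is where the
        // work belongs (in the original graph this pin was left disconnected).
        UpdateQuestProgress();
    }
}
```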