# AI Copilot Architecture

Draft design notes for the Darksun AI Copilot integration. Update as the implementation evolves.

## Objectives
- Provide a conversational assistant that can manipulate the simulation just like the user interface.
- Keep user data and AI prompts private per authenticated Stack Auth user.
- Favour the OpenAI GPT‑5 Responses API with function/tool calling, avoiding the legacy completions routes.
- Reuse existing physics/UI controllers so the copilot performs the same validated actions as the UI.
## High-Level Flow

### Frontend (Darksun app)
- Maintains chat state in a dedicated store and renders a `DraggableWindow`-based Copilot window.
- Sends user messages to the backend via authenticated requests and executes tool calls returned by the backend.
- Uses existing controllers (`SpacecraftController`, `TimeController`, `maneuverActions`, etc.) to execute tools, returning structured results to the backend.
### Backend (FastAPI stack service)
- Validates Stack Auth tokens with the existing `get_current_user`.
- Hosts the `/api/v1/copilot/*` endpoints:
  - `POST /sessions` – create/resume a conversation session scoped to the user id.
  - `POST /respond` – forward the latest transcript to OpenAI and receive tool calls or assistant text.
- Maintains a short-lived per-user session cache (in-memory or a Redis/Postgres table) without storing the raw conversation log unless explicitly enabled.
- Uses the latest `openai` Python SDK (`OpenAI` client) and the Responses API, providing function definitions for each tool.
- Filters tool call outputs to omit sensitive data before returning them to the frontend.
### OpenAI Responses API
- Model: `gpt-5`, configured with a bounded tool list.
- Temperature tuned for operations (low to medium), with context instructions describing available tools and privacy constraints.
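
A minimal sketch of the `/respond` round trip under these notes' assumptions. The import paths, `SYSTEM_PROMPT`, `TOOLS`, and the response shape returned to the frontend are illustrative placeholders, not the final implementation:

```python
# Sketch only: forwards a transcript to the Responses API and returns assistant
# text plus any function calls for the frontend to execute. Import paths,
# SYSTEM_PROMPT, TOOLS, and the response shape are placeholders.
from fastapi import APIRouter, Depends
from openai import OpenAI
from pydantic import BaseModel

from app.auth import get_current_user  # existing Stack Auth dependency (path illustrative)

router = APIRouter(prefix="/api/v1/copilot")
client = OpenAI()  # reads OPENAI_API_KEY from the backend environment; never shipped to the frontend

SYSTEM_PROMPT = "..."   # tool overview + curated reference docs (see Privacy section)
TOOLS: list[dict] = []  # bounded function list built from the Tool Catalogue


class RespondRequest(BaseModel):
    session_id: str
    input: list[dict]  # transcript items: user/assistant messages and prior tool outputs


@router.post("/respond")
async def respond(req: RespondRequest, user=Depends(get_current_user)):
    # The model may answer directly or emit function calls that the frontend
    # executes through the existing controllers.
    response = client.responses.create(
        model="gpt-5",
        instructions=SYSTEM_PROMPT,
        input=req.input,
        tools=TOOLS,
    )
    return {
        "output_text": response.output_text,
        "tool_calls": [
            {"call_id": item.call_id, "name": item.name, "arguments": item.arguments}
            for item in response.output
            if item.type == "function_call"
        ],
    }
```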

## Tool Catalogue
All tools are exposed as structured functions. Each implementation lives frontend-side, ensuring the AI never bypasses domain validation.
| Tool | Description | UI Reuse | Reference |
|---|---|---|---|
| `list_spacecraft` | Returns spacecraft roster, orbital elements, and recent propagation metadata. | Uses `useSpacecraft`, `OrbitCache`. | — |
| `get_spacecraft_properties` | Retrieves a single spacecraft with orbit summaries, perturbation stats, and optional full propagation samples and maneuver/burn details. | Uses `usePhysics`, `useUI`, `useSpacecraft`. | — |
| `create_spacecraft` | Creates a spacecraft via `SpacecraftController.createSpacecraft`. | Accepts the same payload as the creation wizard. | — |
| `plan_maneuver` | Adds maneuver nodes (delta-v in km/s) and returns propagated orbit segments. | Wraps `maneuverActions.addManeuverNode` and `requestSpacecraftOrbitData`. | Include `docs/mission-planner-scripting.md` in the model prompt so the assistant understands script syntax, keywords (`periapsis`, `apoapsis`, `:circular`, `:start`, etc.), and unit expectations. |
| `set_sim_time` | Changes the active simulation epoch. | Calls `TimeController.setSimulationTime`. | — |
| `get_celestial_body_info` | Exposes the current physical/orbital state of any registered celestial body. | Mirrors `CelestialBodyInfoWindow` helpers via `CelestialBodyRegistry`. | — |
| `update_spacecraft` | Renames or edits staging properties. | Leverages stage editor helpers (`StagesSlice`). | — |
| `delete_spacecraft` | Removes spacecraft with confirmation. | Calls `SpacecraftController.removeSpacecraft`. | — |
Each tool returns:
```ts
type ToolResult = {
  ok: boolean
  message: string
  data?: unknown
  telemetry?: Record<string, number | string>
}
```

The backend simply forwards this to OpenAI as the tool result.
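
For illustration, one Catalogue entry expressed in the Responses API's flat function-tool format, plus a helper that wraps a frontend `ToolResult` for the next model turn. The parameter shape and helper name are assumptions, not the final schema:

```python
# Illustrative function definition for set_sim_time; the parameter shape is an
# assumption and should mirror what TimeController.setSimulationTime accepts.
import json

SET_SIM_TIME_TOOL = {
    "type": "function",
    "name": "set_sim_time",
    "description": "Change the active simulation epoch (UTC milliseconds).",
    "parameters": {
        "type": "object",
        "properties": {
            "epoch_ms": {
                "type": "number",
                "description": "Target simulation time as UTC milliseconds.",
            },
        },
        "required": ["epoch_ms"],
        "additionalProperties": False,
    },
    "strict": True,  # ask the model to match the schema exactly
}


def tool_result_item(call_id: str, tool_result: dict) -> dict:
    """Wrap a frontend ToolResult as a function_call_output item for the next turn."""
    return {
        "type": "function_call_output",
        "call_id": call_id,
        "output": json.dumps(tool_result),  # ok/message/data/telemetry, already filtered
    }
```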

## Privacy & Data Handling
- Backend requires an authenticated Stack Auth bearer token for all copilot endpoints.
- Conversations are scoped by `(user_id, session_id)`; no cross-user access (a minimal cache sketch follows this list).
- We do not persist prompt/response history by default; only session ids and timestamps may be stored for rate limiting. If persistence is introduced later, encrypt sensitive payloads at rest.
- OpenAI API key is stored only on the backend. The frontend never sees it.
- Streaming responses are proxied as Server-Sent Events (future enhancement) without exposing intermediate tokens outside the authenticated request.
- System prompts should concatenate the tool schema with curated reference docs, especially `docs/mission-planner-scripting.md`, so the model is aware of Mission Planner scripting conventions.
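
The per-user cache referenced above, as a minimal in-memory sketch. The TTL, names, and eviction policy are assumptions; a Redis/Postgres variant would keep the same key shape:

```python
# Minimal in-memory session cache scoped by (user_id, session_id). Only ids and
# timestamps are kept; transcripts stay out of the store unless persistence is
# explicitly enabled later.
import time
from dataclasses import dataclass, field

SESSION_TTL_SECONDS = 30 * 60  # assumption: 30-minute idle expiry


@dataclass
class CopilotSession:
    user_id: str
    session_id: str
    created_at: float = field(default_factory=time.time)
    last_seen: float = field(default_factory=time.time)


_sessions: dict[tuple[str, str], CopilotSession] = {}


def touch_session(user_id: str, session_id: str) -> CopilotSession:
    """Create or refresh a session; evict anything past its TTL."""
    now = time.time()
    for key, session in list(_sessions.items()):
        if now - session.last_seen > SESSION_TTL_SECONDS:
            del _sessions[key]
    session = _sessions.setdefault(
        (user_id, session_id), CopilotSession(user_id, session_id)
    )
    session.last_seen = now
    return session


def assert_owner(user_id: str, session_id: str) -> None:
    """Reject cross-user access: the key embeds the authenticated user id."""
    if (user_id, session_id) not in _sessions:
        raise PermissionError("Unknown session for this user")
```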

## Failure Handling
- If OpenAI returns a tool call that is unregistered or invalid, the backend converts it to an assistant message prompting the user to retry.
- Frontend tool execution failure → send `ok: false` with an error message; the model can apologize or ask for corrections.
- Network or rate-limit errors bubble back as user-visible notifications with retry guidance.
- Implement exponential backoff (client or server) for OpenAI failures.
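
A minimal backoff sketch for the server side; the attempt count, delays, and retried exception set are assumptions:

```python
# Retry transient OpenAI failures with exponential backoff plus jitter.
import random
import time

from openai import APIConnectionError, APITimeoutError, RateLimitError


def respond_with_backoff(create_response, max_attempts: int = 4):
    """create_response is a zero-argument callable wrapping client.responses.create."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return create_response()
        except (RateLimitError, APITimeoutError, APIConnectionError):
            if attempt == max_attempts:
                raise  # surface to the user as a retryable notification
            time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids thundering herds
            delay *= 2
```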

## Simulation Reference Notes for Prompts
- Epoch – `epochMs` is seeded from `Date.now()`; sim time `t = 0` aligns with the Unix epoch (UTC milliseconds).
- Global frame – All state vectors live in ECLIPJ2000 (right-handed ecliptic frame at J2000).
  - `+X`: points toward the J2000 vernal equinox (the Earth→Sun direction at the March equinox).
  - `+Z`: north ecliptic pole.
  - `+Y`: completes the right-handed set.
- Propagation model – Orbits are propagated forward from the current spacecraft state using high-fidelity n-body integration. Maneuver nodes execute sequentially (KSP-style) and recompute the orbit after each burn. Perturbations include primary gravity, third-body gravity, J2/J3 harmonics when available, solar radiation pressure, atmospheric drag (when applicable), and user-defined thrust segments.
- Body tilts – Every planet/moon orientation quaternion matches IAU 2025 pole/prime meridian data, so Earth's ~23.44° obliquity (and other tilts) are baked into `CelestialBody.orientation`.
- Mention these facts in backend prompts so the AI reasons about time/frame conversions correctly.
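
A sketch of how the backend could fold these facts into the system prompt alongside `docs/mission-planner-scripting.md`; the wording, file handling, and function name are placeholders:

```python
# Assemble the system prompt from a tool overview, the Mission Planner scripting
# guide, and the simulation reference facts above. Wording is illustrative.
from pathlib import Path

SIM_REFERENCE_NOTES = """\
Time: sim time t = 0 aligns with the Unix epoch (UTC milliseconds); epochMs is seeded from Date.now().
Frame: all state vectors are in ECLIPJ2000 (right-handed ecliptic frame at J2000).
Propagation: high-fidelity n-body integration; maneuver nodes execute sequentially and recompute the orbit after each burn.
Orientation: body quaternions follow IAU pole/prime meridian data (Earth obliquity ~23.44 deg).
"""


def build_system_prompt(tool_descriptions: str) -> str:
    scripting_guide = Path("docs/mission-planner-scripting.md").read_text(encoding="utf-8")
    return "\n\n".join([
        "You are the Darksun copilot. Act only through the provided tools and "
        "never reveal other users' data.",
        "Available tools:\n" + tool_descriptions,
        "Mission Planner scripting reference:\n" + scripting_guide,
        "Simulation reference:\n" + SIM_REFERENCE_NOTES,
    ])
```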

## Open Questions / Next Steps
- Decide whether to persist session transcripts for analytics (opt-in only).
- Evaluate need for streaming UI once basic loop works.
- Confirm whether the stack service should proxy any other SaaS (e.g., vector search) for copilot memory.
- Add integration tests hitting `/api/v1/copilot/respond` with a mocked OpenAI client.
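
A possible shape for that test, assuming the route module exposes the OpenAI `client` and `get_current_user` used in the earlier sketch; import paths are illustrative:

```python
# Sketch of an integration test for /api/v1/copilot/respond with the OpenAI
# client mocked; module paths are placeholders for the real project layout.
from types import SimpleNamespace

from fastapi.testclient import TestClient

import app.routes.copilot as copilot_routes  # module owning `client` and the router
from app.main import app                     # FastAPI application instance


def test_respond_returns_tool_call(monkeypatch):
    # Bypass Stack Auth and stub the Responses API call.
    app.dependency_overrides[copilot_routes.get_current_user] = lambda: {"id": "user_1"}
    fake_call = SimpleNamespace(type="function_call", call_id="call_1",
                                name="set_sim_time", arguments='{"epoch_ms": 0}')
    fake_response = SimpleNamespace(output_text="", output=[fake_call])
    monkeypatch.setattr(copilot_routes.client.responses, "create",
                        lambda **kwargs: fake_response)

    resp = TestClient(app).post(
        "/api/v1/copilot/respond",
        json={"session_id": "s1", "input": [{"role": "user", "content": "set sim time to t = 0"}]},
    )

    assert resp.status_code == 200
    assert resp.json()["tool_calls"][0]["name"] == "set_sim_time"
```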