Everything inside PromptRunner is built for production-grade prompt engineering workflows.
PromptRunner mirrors real-world software projects: Applications contain Features, each Feature owns Guides, Conversations, and Templates. Strongly typed IDs (ApplicationId, FeatureId, GuideId, ConversationId) and aggregate roots enforce data integrity while JSON folders in %LOCALAPPDATA%\PromptRunner\ keep everything portable.
Each record tracks CreatedAt and LastModified timestamps, so you always know when anything was created or last changed.
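As a minimal sketch of the pattern described above, strongly typed IDs can be modeled as record structs wrapping a GUID; the member names beyond those the document mentions (ApplicationId, FeatureId, CreatedAt, LastModified) are assumptions:

```csharp
using System;

// Hypothetical sketch: typed IDs prevent mixing up identifiers of
// different entities at compile time.
public readonly record struct ApplicationId(Guid Value)
{
    public static ApplicationId New() => new(Guid.NewGuid());
}

public readonly record struct FeatureId(Guid Value)
{
    public static FeatureId New() => new(Guid.NewGuid());
}

// An aggregate root owns its children and tracks audit timestamps.
public sealed class Feature
{
    public FeatureId Id { get; init; } = FeatureId.New();
    public ApplicationId ApplicationId { get; init; }   // parent reference
    public string Name { get; set; } = "";
    public DateTimeOffset CreatedAt { get; init; } = DateTimeOffset.UtcNow;
    public DateTimeOffset LastModified { get; private set; } = DateTimeOffset.UtcNow;

    public void Touch() => LastModified = DateTimeOffset.UtcNow;
}
```

Because the compiler rejects passing a `FeatureId` where an `ApplicationId` is expected, whole classes of lookup bugs disappear without runtime checks.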
Work against every major LLM vendor without leaving the workspace:
Slash commands (/model, /models, /test, /provider, /clear, /toguide, /tonotes, /help) and named HttpClients keep model switching instant while IAsyncEnumerable streams arrive token by token.
Every conversation references a PromptConfiguration that captures how the AI should think and respond:
The `GenerateCompletePrompt` method assembles consistent structured prompts so every model run starts with the exact intent you designed.
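The document names `GenerateCompletePrompt` but not its shape, so the following assembly logic is purely illustrative, assuming a configuration with system instructions, tone, and output-format sections:

```csharp
using System.Text;

// Illustrative sketch: section names and fields are assumptions.
public sealed class PromptConfiguration
{
    public string SystemInstructions { get; set; } = "";
    public string Tone { get; set; } = "";
    public string OutputFormat { get; set; } = "";

    // Assembles a consistent structured prompt from the configured sections
    // plus the user's message.
    public string GenerateCompletePrompt(string userMessage)
    {
        var sb = new StringBuilder();
        sb.AppendLine("## System Instructions");
        sb.AppendLine(SystemInstructions);
        if (!string.IsNullOrWhiteSpace(Tone))
            sb.AppendLine("## Tone").AppendLine(Tone);
        if (!string.IsNullOrWhiteSpace(OutputFormat))
            sb.AppendLine("## Output Format").AppendLine(OutputFormat);
        sb.AppendLine("## Request");
        sb.AppendLine(userMessage);
        return sb.ToString();
    }
}
```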
Attach context directly to each feature:
Guides stay linked to their feature so prompts always run with the right evidence.
PromptConversation aggregates persist every iteration under `prompts/{conversation-id}` with metadata.json (title, explanation, timestamps) and messages.json (role, content, tokens). Load any run, continue the thread, and keep a searchable history of what shipped.
Use cases include iteration tracking, knowledge sharing, onboarding playbooks, and building evaluation datasets.
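A conversation folder on disk might look like the sketch below; the tree layout follows the paths and file names stated above, while the per-message field grouping is an assumption:

```
%LOCALAPPDATA%\PromptRunner\prompts\{conversation-id}\
├── metadata.json   # title, explanation, timestamps
└── messages.json   # array of { role, content, tokens }
```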
PromptTemplate entities live beside each feature with YAML metadata, `$ARGUMENTS` placeholders, and optional configuration overrides. The repository pattern exposes CRUD operations so teams can standardize code review prompts, QA scripts, research checklists, and more.
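A template file might look like the following; apart from the `$ARGUMENTS` placeholder named above, the YAML field names are illustrative assumptions:

```markdown
---
name: code-review                 # illustrative metadata fields
description: Standard review checklist
maxTokens: 2048                   # optional configuration override
---
Review the following change for correctness, style, and security issues:

$ARGUMENTS
```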
Optional sync for Google Drive and OneDrive uses PKCE-based OAuth flows with localhost callbacks. Store credentials in `%APPDATA%\PromptRunner\settings\drive-config.json`, authenticate once, and push/pull entire workspaces with upload/download progress, retry logic, and timestamped logs.
Local-first remains untouched: sync is a deliberate opt-in action with clear status indicators.
A 3-column layout (Tree + Lists / Chat / Markdown) uses a custom 13-shade greyscale palette, Consolas typography, and keyboard-first operations (Ctrl+1/2/3 focus panes, `m` for new guide, `p` for conversation, `Enter` to send, `/` commands, Ctrl+S to save). Everything feels like a focused terminal session while still providing modern UI touches.
PromptRunner keeps configuration explicit: `llm-config.json` stores provider defaults, models, and max tokens, while `drive-config.json` stores sync providers and OAuth tokens. Settings UI exposes provider toggles, credential inputs, sync test buttons, token expiry readouts, and log viewers.
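An `llm-config.json` might look like this; the key names and model values are illustrative, since the document only states that the file stores provider defaults, models, and max tokens:

```json
{
  "defaultProvider": "openai",
  "providers": {
    "openai":    { "model": "gpt-4o",          "maxTokens": 4096 },
    "anthropic": { "model": "claude-sonnet-4", "maxTokens": 4096 }
  }
}
```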
Domain, Application, Infrastructure, and Presentation layers enforce separation of concerns, while dependency injection wires repositories, services, and providers.
ILLMProvider implementations stream via IAsyncEnumerable, parse SSE payloads, and plug into HttpClient factories with timeouts, retries, and connection pooling.
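A minimal sketch of that streaming path, assuming an OpenAI-style SSE body of `data: {json}` lines terminated by `data: [DONE]`; the class name and JSON field paths are assumptions:

```csharp
using System.Runtime.CompilerServices;
using System.Text.Json;

public sealed class OpenAiStyleProvider   // illustrative name
{
    private readonly HttpClient _http;
    public OpenAiStyleProvider(HttpClient http) => _http = http;

    // Yields tokens as they arrive instead of buffering the full response.
    public async IAsyncEnumerable<string> StreamAsync(
        HttpRequestMessage request,
        [EnumeratorCancellation] CancellationToken ct = default)
    {
        // ResponseHeadersRead lets us start reading before the body completes.
        using var response = await _http.SendAsync(
            request, HttpCompletionOption.ResponseHeadersRead, ct);
        response.EnsureSuccessStatusCode();

        await using var stream = await response.Content.ReadAsStreamAsync(ct);
        using var reader = new StreamReader(stream);

        while (await reader.ReadLineAsync(ct) is { } line)
        {
            if (!line.StartsWith("data: ")) continue;
            var payload = line["data: ".Length..];
            if (payload == "[DONE]") yield break;

            using var doc = JsonDocument.Parse(payload);
            if (doc.RootElement.GetProperty("choices")[0]
                   .GetProperty("delta")
                   .TryGetProperty("content", out var content) &&
                content.GetString() is { } token)
            {
                yield return token;
            }
        }
    }
}
```

Consumers simply `await foreach` over the result, which is what keeps the chat pane rendering token by token.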
CommunityToolkit.Mvvm powers observable properties, RelayCommand actions, and WeakReferenceMessenger events across MAUI or Avalonia shells with compiled bindings for speed.
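A sketch of that MVVM wiring using the toolkit's source generators; the view-model and member names are illustrative:

```csharp
using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;

public partial class ConversationViewModel : ObservableObject
{
    // [ObservableProperty] generates DraftMessage/IsStreaming properties
    // that raise PropertyChanged; the attribute below also re-evaluates
    // SendCommand.CanExecute whenever the field changes.
    [ObservableProperty]
    [NotifyCanExecuteChangedFor(nameof(SendCommand))]
    private string draftMessage = "";

    [ObservableProperty]
    [NotifyCanExecuteChangedFor(nameof(SendCommand))]
    private bool isStreaming;

    [RelayCommand(CanExecute = nameof(CanSend))]
    private async Task SendAsync()
    {
        IsStreaming = true;
        try { /* stream tokens into the transcript here */ }
        finally { IsStreaming = false; }
    }

    private bool CanSend() => !IsStreaming && DraftMessage.Length > 0;
}
```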
No database required: everything is JSON, Markdown, or binary assets stored beside metadata. Structure mirrors the domain for easy backups and diffing.
Left pane handles apps/features plus Guides/Conversations lists, the center pane streams chat, and the right pane edits markdown with previews, attachments, and note fields.
Ctrl+1/2/3 focus panes, `A`/`N` manage trees, `m` creates guides, `p` starts conversations, `/model` flips providers, and Ctrl+S saves markdown instantly.
Async operations keep the UI fluid, streaming never blocks typing, and compiled bindings plus virtualization keep large project trees snappy.
Get in touch to see PromptRunner in detail, request early access, or influence the roadmap.
Contact Us