AI Settings
Configure AI providers, models, and per-project settings
Archflow supports multiple AI providers through a Bring Your Own Key (BYOK) system. Configure your preferred providers and models for documentation generation, Archie conversations, and architecture analysis.
Supported Providers
| Provider | Example Models | Notes |
|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini | Most widely used |
| Anthropic | Claude 4.5 Sonnet, Claude 4.6 Opus | Excellent for documentation |
| Azure OpenAI | Azure-hosted GPT models | Enterprise deployments with data residency |
| Mistral | Mistral Large, Mistral Medium | Cost-effective European provider |
| OpenRouter | Access to 100+ models | Single key for multiple providers |
| HuggingFace | Open-source models | Self-hosted or API access |
Account-Level Settings
Adding a Provider
1. Go to Profile → AI Settings
2. Click Add Provider
3. Select the provider type from the dropdown
4. Enter your API key
5. Click Test Connection to verify the key works
6. Save the configuration
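Under the hood, a "Test Connection" check is typically just a cheap authenticated request, such as listing the provider's models. The sketch below shows how such a request might be built; the OpenAI endpoint is real, but the Mistral URL and the function name are assumptions for illustration, not Archflow's actual implementation.

```python
from urllib import request

# Endpoints that list available models; calling them with a key is a
# lightweight way to verify it works. OpenAI's is documented; treat the
# Mistral entry as an assumption for this sketch.
MODEL_LIST_URLS = {
    "openai": "https://api.openai.com/v1/models",
    "mistral": "https://api.mistral.ai/v1/models",
}

def build_key_check(provider: str, api_key: str) -> request.Request:
    """Build the request a connection test might send (hypothetical helper)."""
    return request.Request(
        MODEL_LIST_URLS[provider],
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_key_check("openai", "sk-test")
# A real check would call request.urlopen(req) and treat HTTP 200 as a
# working key and 401 as an invalid one.
```
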
Model Selection
Once a provider is configured, select which models to use for different tasks:
- Chat model --- Used for Archie conversations and general AI interactions
- Tool model --- Used for tool calling and structured outputs (may differ from chat model)
- Embedding provider --- Used for semantic search and knowledge retrieval
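The three roles above can be pictured as a simple task-to-model mapping. The dictionary keys mirror the roles in this section; the model names and the fallback-to-chat behavior are illustrative assumptions, not Archflow's actual defaults.

```python
# Hypothetical per-role model selection; model names are illustrative.
model_config = {
    "chat": "claude-sonnet-4-5",            # Archie conversations
    "tool": "gpt-4o-mini",                  # tool calling / structured outputs
    "embedding": "text-embedding-3-small",  # semantic search and retrieval
}

def model_for(task: str) -> str:
    # Assume unrecognized tasks fall back to the chat model.
    return model_config.get(task, model_config["chat"])
```
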
Managing Multiple Providers
You can configure multiple providers simultaneously:
- Add providers from different vendors for redundancy
- Use different models for different tasks (e.g., a fast model for chat, a capable model for documentation generation)
- Remove or update providers as needed
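The redundancy idea above can be sketched as trying each configured provider in order until one answers. This is a minimal illustration of the pattern, not Archflow's failover logic; the function and provider names are hypothetical.

```python
def call_with_fallback(providers, prompt):
    """Try each provider callable in order until one succeeds (sketch)."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors.append(exc)
    raise RuntimeError(f"all providers failed: {errors}")

def flaky(prompt):
    # Stand-in for a primary provider that is currently unavailable.
    raise TimeoutError("primary provider down")

def backup(prompt):
    # Stand-in for a second configured provider.
    return f"answer to: {prompt}"

result = call_with_fallback([flaky, backup], "hello")
```
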
Security
API keys are encrypted at rest using AES-256-GCM encryption. Keys are never exposed in the UI after initial entry --- only a masked preview is shown.
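A masked preview like the one described might be produced along these lines. The logic below is an illustrative sketch, not Archflow's actual masking code; the encryption itself happens server-side and is never reimplemented in the browser.

```python
def mask_key(api_key: str, visible: int = 4) -> str:
    """Masked preview of a stored key (illustrative logic).

    The full key stays encrypted at rest (AES-256-GCM per the docs);
    only a preview like this would ever be shown in the UI.
    """
    if len(api_key) <= visible * 2:
        return "*" * len(api_key)
    return api_key[:visible] + "****" + api_key[-visible:]

preview = mask_key("sk-abc123def456ghi789")
```
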
Per-Project Settings
Projects can override your default AI settings:
1. Open the project Settings page
2. Navigate to AI configuration
3. Select project-specific model preferences
Project-level settings take priority over your account-level defaults. This is useful when different projects have different requirements --- for example, using a more capable model for a complex enterprise architecture project.
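The precedence rule above behaves like a simple override merge: any role a project sets wins, and everything else falls back to the account default. The settings shape and model names here are assumptions for illustration.

```python
# Hypothetical account defaults and project-level overrides.
account_defaults = {"chat": "gpt-4o-mini", "tool": "gpt-4o-mini"}
project_overrides = {"chat": "claude-sonnet-4-5"}

def effective_settings(defaults: dict, overrides: dict) -> dict:
    """Project values take priority; unset roles keep the account default."""
    return {**defaults, **overrides}

settings = effective_settings(account_defaults, project_overrides)
```
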
AI Status
Check the status of your AI configuration:
- Profile page shows which providers are configured and active
- Connection test verifies your API keys are working
- Model availability shows which models you can access with your configured keys
Usage Monitoring
Track your AI usage from Profile → Usage:
- Monthly AI action count and limit
- Breakdown by tool type (documentation, chat, analysis, etc.)
- Top tools by usage
- Lifetime cumulative usage
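The per-tool breakdown and top-tools view can be thought of as a simple aggregation over an action log. The event format below is a hypothetical stand-in for whatever Archflow records internally.

```python
from collections import Counter

# Hypothetical usage log: (tool_type, actions_used) per AI action batch.
events = [("documentation", 12), ("chat", 30), ("analysis", 5), ("chat", 10)]

by_tool = Counter()
for tool, count in events:
    by_tool[tool] += count

monthly_total = sum(by_tool.values())   # compare against the monthly limit
top_tools = by_tool.most_common(2)      # "top tools by usage" view
```
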