LLM Providers
=============

This application uses `OpenRouter`_ as a unified API gateway to Large Language
Models from multiple providers. OpenRouter exposes a single interface to all of
these models, simplifying integration and offering competitive pricing.

.. _OpenRouter: https://openrouter.ai/

The application supports several models from different providers, each
optimized for different use cases:

Google Models
~~~~~~~~~~~~~

`Google`_ provides the Gemini family of models through OpenRouter:

* ``gemini-2.5-pro``: High-quality Gemini model for complex reasoning and
  analysis; excellent for interpretation tasks. *Zero data retention.*
* ``gemini-2.5-flash``: Fast and efficient Gemini model; a good balance of
  speed and quality for most tasks. *Zero data retention.*

**Data Policy**: Google Gemini models have a **zero data retention policy**;
your data is not stored or used for training.

.. _Google: https://gemini.google.com/

OpenAI Models
~~~~~~~~~~~~~

`OpenAI`_ provides the GPT family of models through OpenRouter:

* ``gpt-5``: Highest-quality and most advanced model; best for complex
  interpretation and summarization, but more resource-intensive and slower
  than the mini/nano variants.
* ``gpt-5-mini``: Optimized for speed and efficiency; suitable for most
  workflows, with a balance of quality and cost.
* ``gpt-5-nano``: Fastest and lowest-cost option for simple extraction or
  tagging; not suited to detailed interpretations.

**Data Policy**: OpenAI stores data for **30 days** for abuse-detection
purposes only. Your data is **not used for training** the models.

.. _OpenAI: https://openai.com/

Mistral Models
~~~~~~~~~~~~~~

`Mistral AI`_ provides the Mistral family of models through OpenRouter:

* ``mistral-small-3.2``: Efficient Mistral model with balanced performance;
  good for a variety of text-processing tasks. *Zero data retention.*

**Data Policy**: Mistral AI follows a **zero data retention policy**; your
data is not stored or used for training.

.. _Mistral AI: https://mistral.ai/
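All of the models above are reached through OpenRouter's OpenAI-compatible
chat-completions endpoint. The sketch below shows how such a request might be
built; the ``google/gemini-2.5-flash`` model id and the ``OPENROUTER_API_KEY``
variable name are illustrative assumptions, not names taken from this
application::

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat request for OpenRouter."""
    payload = {
        "model": model,  # OpenRouter uses provider-prefixed ids, e.g. "google/gemini-2.5-flash"
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    # "OPENROUTER_API_KEY" is an assumed variable name for this sketch.
    req = build_chat_request(
        "google/gemini-2.5-flash",
        "Summarize this text.",
        os.environ.get("OPENROUTER_API_KEY", ""),
    )
    # Sending requires a valid key:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
```

Swapping the model id string is all that is needed to switch between the
Gemini, GPT, and Mistral models listed above.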
Custom Model Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~

For users who want to use their own models or custom endpoints, the
application supports custom model configuration through the following
environment variables:

* ``LLM_CUSTOM_API_KEY``: The API key for your custom model (not required for
  local models).
* ``LLM_CUSTOM_API_ENDPOINT``: The API endpoint URL for your custom model
  (e.g. ``https://api.yourmodelprovider.com/v1/chat/completions``).
* ``LLM_CUSTOM_MODEL``: The model name or identifier for your custom model
  (e.g. ``your-model-name``).

We cannot guarantee compatibility with custom models, as they may have
different API specifications.
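As a sketch of how these three variables might be consumed, assuming the
custom endpoint accepts an OpenAI-style chat-completions payload (as the
example endpoint URL suggests) — the ``build_custom_request`` helper is
hypothetical, not part of the application::

```python
import json
import os
import urllib.request


def build_custom_request(prompt: str) -> urllib.request.Request:
    """Build a chat request from the LLM_CUSTOM_* environment variables."""
    endpoint = os.environ["LLM_CUSTOM_API_ENDPOINT"]  # required
    model = os.environ["LLM_CUSTOM_MODEL"]            # required
    api_key = os.environ.get("LLM_CUSTOM_API_KEY")    # optional for local models

    headers = {"Content-Type": "application/json"}
    if api_key:  # local models typically need no Authorization header
        headers["Authorization"] = f"Bearer {api_key}"

    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
```

For a local model, only ``LLM_CUSTOM_API_ENDPOINT`` and ``LLM_CUSTOM_MODEL``
need to be set; the ``Authorization`` header is then omitted entirely.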