API Configuration
Configure your OpenAI-compatible API endpoint for bias analysis. Works with local runtimes (Ollama, LM-Studio), cloud services, or any other compatible API.
For Ollama: http://localhost:11434/v1 • For LM-Studio: http://localhost:1234/v1
No services found. If you have Ollama or LM-Studio running on a custom port, manually configure below.
Ollama default: 11434 • LM-Studio default: 1234
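The service detection described above can be sketched as a simple socket probe against each service's default endpoint. A minimal sketch, assuming the defaults listed here; `endpoint_reachable` and `detect_services` are hypothetical helper names, not this app's actual code:

```python
import socket
from urllib.parse import urlparse

# Default OpenAI-compatible endpoints for common local services.
# LM Studio's documented default port is 1234; adjust if yours runs elsewhere.
DEFAULT_ENDPOINTS = {
    "Ollama": "http://localhost:11434/v1",
    "LM-Studio": "http://localhost:1234/v1",
}

def endpoint_reachable(endpoint: str, timeout: float = 0.5) -> bool:
    """Return True if something is listening on the endpoint's host:port."""
    parsed = urlparse(endpoint)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, unreachable host, etc.
        return False

def detect_services() -> dict:
    """Map each known service name to whether its default endpoint responds."""
    return {name: endpoint_reachable(url) for name, url in DEFAULT_ENDPOINTS.items()}
```

A probe like this only confirms that a port is open; it does not verify the service actually speaks the OpenAI API, which is why manual configuration remains available for custom setups.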
Enter any model name that's installed on your local service
🔒 Encrypted before storage • For remote APIs only • Local services don't need an API key
Optional for local services: Leave blank for Ollama/LM-Studio, or enter any placeholder value
Enter the exact model identifier (e.g., llama3.2, mistral, gpt-4o)
Lower values make output more deterministic; higher values make it more creative
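The endpoint, model identifier, and temperature settings above all feed into a standard OpenAI-style chat-completions request body. A minimal sketch of how that body fits together (field names follow the OpenAI API schema; `build_chat_request` is a hypothetical helper, not this app's internal code):

```python
def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style /chat/completions request body.

    The same body works against Ollama, LM-Studio, or a cloud service,
    since they all expose the same OpenAI-compatible schema.
    """
    # The OpenAI API accepts temperatures in [0.0, 2.0].
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    return {
        "model": model,              # exact identifier, e.g. "llama3.2"
        "temperature": temperature,  # lower = more deterministic
        "messages": [{"role": "user", "content": prompt}],
    }
```

For repeatable bias analysis a low temperature (e.g. 0.0–0.3) is usually the sensible choice.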
Highest priority source for news fetching
Get a free key at gnews.io (100 free requests/day)
Get a free key at newsapi.org (Developer tier)
Get a free key at currentsapi.services
Quick Setup Guides
Ollama (Local)
- Install Ollama from ollama.ai
- Run: ollama serve
- Pull a model (e.g., ollama pull llama3.2)
- Endpoint: http://localhost:11434/v1 (or custom port)
- Model: Use any installed model name
- API Key: Use any value (e.g., "local")
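To check which model names are valid, any OpenAI-compatible server (Ollama included) exposes `GET <endpoint>/models`. A minimal sketch of querying and parsing it, assuming the standard OpenAI list-response shape; the helper names are hypothetical:

```python
import json
from urllib.request import urlopen

def installed_model_ids(models_response: dict) -> list:
    """Extract model identifiers from an OpenAI-style GET /models response."""
    return [entry["id"] for entry in models_response.get("data", [])]

def fetch_installed_models(endpoint: str) -> list:
    """Query a running service, e.g. fetch_installed_models("http://localhost:11434/v1")."""
    with urlopen(f"{endpoint}/models", timeout=2) as resp:
        return installed_model_ids(json.load(resp))
```

Whatever identifiers this returns are exactly the strings to enter in the Model field.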
LM-Studio (Local)
- Install LM-Studio from lmstudio.ai
- Load a model in LM-Studio
- Start the local server (port 1234 by default)
- Endpoint: http://localhost:1234/v1 (or custom port)
- Model: Use loaded model name
- API Key: Use any value (e.g., "local")