API Configuration

Configure your OpenAI-compatible API endpoint for bias analysis. Supports local models (Ollama), cloud services, or any compatible API.

API Not Configured
Configure your API settings to start analyzing text for bias.
Provider
Select a predefined provider or configure a custom endpoint

For Ollama: http://localhost:11434/v1 • For LM-Studio: http://localhost:8000/v1

Quick Setup - Local Services
Auto-detect and connect to Ollama or LM-Studio running on your machine (checks common ports: Ollama 11434/11435/8080, LM-Studio 8000/8001/1234)
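The auto-detect step can be sketched as a simple port probe. This is an illustrative sketch, not the app's actual implementation: it assumes each service answers GET /v1/models with JSON on its OpenAI-compatible endpoint, and the port lists are the ones named above.

```python
import json
import urllib.request

# Common local ports listed above; adjust for custom setups.
CANDIDATE_PORTS = {
    "Ollama": [11434, 11435, 8080],
    "LM-Studio": [8000, 8001, 1234],
}

def detect_local_services(timeout: float = 0.5) -> list[tuple[str, str]]:
    """Probe candidate ports for an OpenAI-compatible /v1/models route.

    Returns (service_name, base_url) pairs for every port that answers
    with valid JSON; closed ports and non-JSON responses are skipped.
    """
    found = []
    for service, ports in CANDIDATE_PORTS.items():
        for port in ports:
            base_url = f"http://localhost:{port}/v1"
            try:
                with urllib.request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
                    json.load(resp)  # must parse as JSON to count as a hit
                    found.append((service, base_url))
            except Exception:
                continue  # port closed or not an OpenAI-compatible server
    return found

if __name__ == "__main__":
    services = detect_local_services()
    if services:
        for name, url in services:
            print(f"Found {name} at {url}")
    else:
        print("No services found; configure manually below.")
```

If nothing answers on these ports, the manual setup below covers custom-port installs.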

No services found. If you have Ollama or LM-Studio running on a custom port, manually configure below.

Manual Local Setup
Configure Ollama or LM-Studio with custom port and model name

Ollama default: 11434 • LM-Studio default: 8000

Enter any model name that's installed on your local service

Authentication
Your API key is stored encrypted locally and sent only to the API endpoint you configure

🔒 Encrypted before storage • For remote APIs only • Local services don't need an API key

Optional for local services: Leave blank for Ollama/LM-Studio, or enter any placeholder value

Model Configuration
Select or specify the model to use for analysis

Enter the exact model identifier (e.g., llama3.2, mistral, gpt-4o)

Lower values produce more deterministic output; higher values produce more creative output

News API Configuration
Configure free news API keys to fetch latest articles for analysis

Highest priority source for news fetching

Get a free key at gnews.io (100 free requests/day)

Get a free key at newsapi.org (Developer tier)

Get a free key at currentsapi.services
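As a rough illustration of how these keys are used, the sketch below builds a request URL for each provider. The newsapi.org path and apiKey parameter follow that provider's public docs; the gnews.io and currentsapi.services paths and parameter names are assumptions and may need adjusting against each provider's documentation.

```python
from urllib.parse import urlencode

# (name, base endpoint, API-key query parameter), in fetch-priority order.
# Only the newsapi.org entry is taken from documented usage; the other two
# endpoints are assumptions.
NEWS_SOURCES = [
    ("gnews", "https://gnews.io/api/v4/top-headlines", "apikey"),
    ("newsapi", "https://newsapi.org/v2/top-headlines", "apiKey"),
    ("currents", "https://api.currentsapi.services/v1/latest-news", "apiKey"),
]

def build_news_url(source: str, api_key: str, **params: str) -> str:
    """Build a request URL for one of the configured news providers."""
    for name, base, key_param in NEWS_SOURCES:
        if name == source:
            query = {key_param: api_key, **params}
            return f"{base}?{urlencode(query)}"
    raise ValueError(f"unknown source: {source}")

# Example:
#     build_news_url("newsapi", "YOUR_KEY", country="us")
```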

Quick Setup Guides

Ollama (Local)

  1. Install Ollama from ollama.ai
  2. Run: ollama serve
  3. Pull any model (e.g., ollama pull llama3.2)
  4. Endpoint: http://localhost:11434/v1 (or custom port)
  5. Model: Use any installed model name
  6. API Key: Use any value (e.g., "local")
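After step 6 the server speaks the standard OpenAI chat-completions format, so a request needs nothing beyond the standard library. A minimal sketch, using the endpoint, model, and placeholder key from the steps above (build_chat_payload is an illustrative helper, not part of any SDK):

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # lower = more deterministic (see above)
    }

def chat(base_url: str, model: str, prompt: str) -> str:
    """Send one chat-completion request to an OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            # Ollama ignores the key; any value works, matching step 6.
            "Authorization": "Bearer local",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running Ollama server):
#     print(chat("http://localhost:11434/v1", "llama3.2", "Say hello."))
```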

LM-Studio (Local)

  1. Install LM-Studio from lmstudio.ai
  2. Load a model in LM-Studio
  3. Start the local server (port 8000 by default)
  4. Endpoint: http://localhost:8000/v1 (or custom port)
  5. Model: Use loaded model name
  6. API Key: Use any value (e.g., "local")
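Both local servers also expose GET /v1/models on the same endpoint, which is one way to find the exact name needed in step 5. A small sketch, assuming the default port from step 3 (list_models and model_ids are illustrative names):

```python
import json
import urllib.request

def model_ids(models_response: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [m["id"] for m in models_response.get("data", [])]

def list_models(base_url: str = "http://localhost:8000/v1") -> list[str]:
    """Return the model IDs the local server reports via GET /v1/models."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=2) as resp:
        return model_ids(json.load(resp))

# Example (requires a running LM-Studio server):
#     print(list_models())
```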