Add support for dual-AI provider (Anthropic + OpenAI) with user selection #77

@Hebx

Description

Mallory currently uses Anthropic’s SDK (ANTHROPIC_API_KEY) for AI chat.

Add support for OpenAI (OPENAI_API_KEY) and let users choose the provider per chat session (Anthropic ↔ OpenAI). Keep backward compatibility and make the system extensible for future providers.

Project specification & scope

Backend

  • Identify where Anthropic is used (likely apps/server/*, chat endpoint).
  • Create a provider abstraction layer (e.g. apps/server/lib/ai/, sketched after this list):
    • Define a common interface (chat, stream if required).
  • Implement providers:
    • openai.ts — implement interface using the openai npm package.
    • anthropic.ts — refactor existing Anthropic usage (@anthropic-ai/sdk) to implement the same interface.
  • Add provider registry and selector: getAIClient(providerName: string).
  • Update /api/chat:
    • Accept provider: "openai" | "anthropic" in request body (fallback to default env).
    • Route requests through getAIClient(provider).
  • Update environment:
    • Add OPENAI_API_KEY to .env.example.
    • Keep ANTHROPIC_API_KEY for backwards compatibility.
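
A minimal sketch of what the abstraction layer and registry could look like, assuming a non-streaming chat() interface, placeholder model names (gpt-4o, claude-3-5-sonnet-latest), and a hypothetical DEFAULT_AI_PROVIDER env variable; the real layout and streaming support should follow the existing server code:

```ts
// apps/server/lib/ai/index.ts (hypothetical location)
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

export interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Common interface every provider implements; a stream() method could be added alongside chat().
export interface AIClient {
  chat(messages: ChatMessage[]): Promise<string>;
}

// OpenAI-backed provider (model name is a placeholder).
class OpenAIClient implements AIClient {
  private client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  async chat(messages: ChatMessage[]): Promise<string> {
    const completion = await this.client.chat.completions.create({
      model: "gpt-4o",
      messages,
    });
    return completion.choices[0]?.message?.content ?? "";
  }
}

// Anthropic-backed provider wrapping the existing @anthropic-ai/sdk usage.
class AnthropicClient implements AIClient {
  private client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

  async chat(messages: ChatMessage[]): Promise<string> {
    const response = await this.client.messages.create({
      model: "claude-3-5-sonnet-latest",
      max_tokens: 1024,
      messages,
    });
    const block = response.content[0];
    return block?.type === "text" ? block.text : "";
  }
}

// Registry keyed by provider name; unknown names are rejected, a missing name falls back to the default.
const providers: Record<string, () => AIClient> = {
  openai: () => new OpenAIClient(),
  anthropic: () => new AnthropicClient(),
};

export function getAIClient(providerName?: string): AIClient {
  const name = providerName ?? process.env.DEFAULT_AI_PROVIDER ?? "anthropic";
  const factory = providers[name];
  if (!factory) throw new Error(`Unknown AI provider: ${name}`);
  return factory();
}
```

The /api/chat endpoint would then route through the registry; a rough Express-style sketch (the actual server framework may differ):

```ts
// apps/server/routes/chat.ts (hypothetical location and framework)
import express from "express";
import { getAIClient } from "../lib/ai";

const router = express.Router();

router.post("/api/chat", async (req, res) => {
  try {
    // provider is optional; getAIClient falls back to the env-configured default.
    const { messages, provider } = req.body;
    const reply = await getAIClient(provider).chat(messages);
    res.json({ reply, provider });
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
});

export default router;
```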

Frontend

  • Add a UI control (dropdown/toggle) to select the provider per session (see the sketch after this list).
  • Show active provider in the UI (e.g., badge: Powered by GPT-4o / Powered by Claude).
  • Send selected provider to backend in chat request payload.
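
A rough sketch of the selector and payload wiring, assuming a React frontend and a JSON /api/chat endpoint; component structure and fetch details should follow the actual frontend stack:

```tsx
// Hypothetical provider selector component.
import { useState } from "react";

type Provider = "openai" | "anthropic";

export function ProviderSelector({ onChange }: { onChange: (p: Provider) => void }) {
  const [provider, setProvider] = useState<Provider>("anthropic");

  return (
    <select
      value={provider}
      onChange={(e) => {
        const next = e.target.value as Provider;
        setProvider(next);
        onChange(next);
      }}
    >
      <option value="anthropic">Powered by Claude</option>
      <option value="openai">Powered by GPT-4o</option>
    </select>
  );
}

// Including the selection in the chat request payload.
export async function sendChat(message: string, provider: Provider) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: message }],
      provider,
    }),
  });
  return res.json();
}
```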

Testing

  • Unit tests for provider modules with mocked API responses (example after this list).
  • Integration tests for /api/chat routing with providers mocked.
  • E2E test verifying that the frontend provider selection flows through to the backend and a response is returned.
  • Error & fallback tests (optional).

Docs

  • Update README.md and .env.example:
    • Document OPENAI_API_KEY and ANTHROPIC_API_KEY.
    • Show example payload: { provider: "openai" }.
    • Note model name mapping and provider-specific caveats.

Optional / Extras

  • Fallback logic: if the selected provider errors or is unavailable, optionally retry with the other provider (configurable); see the sketch after this list.
  • Instrumentation: log provider used, latency, errors.
  • Add more providers later (Mistral, Gemini) using same interface.
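
A possible shape for the fallback wrapper, assuming the getAIClient()/chat() interface sketched above; how the fallback is enabled (env flag, per-request option) is left to configuration:

```ts
// Hypothetical fallback helper: try the selected provider, then retry once with the other one.
import { getAIClient, type ChatMessage } from "./lib/ai";

type Provider = "openai" | "anthropic";

export async function chatWithFallback(
  provider: Provider,
  messages: ChatMessage[]
): Promise<{ reply: string; providerUsed: Provider }> {
  const fallback: Provider = provider === "openai" ? "anthropic" : "openai";
  try {
    const reply = await getAIClient(provider).chat(messages);
    return { reply, providerUsed: provider };
  } catch (err) {
    // Instrumentation hook: log which provider failed and that we are falling back.
    console.error(`Provider ${provider} failed, retrying with ${fallback}`, err);
    const reply = await getAIClient(fallback).chat(messages);
    return { reply, providerUsed: fallback };
  }
}
```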

Acceptance criteria

  • Server has a provider abstraction layer.
  • openai provider implemented; OPENAI_API_KEY supported.
  • anthropic provider refactored to same interface.
  • /api/chat accepts provider and routes requests accordingly.
  • Frontend allows user to select provider and sends selection to backend.
  • Tests cover both provider code paths.
  • README.md and .env.example updated.
  • (Optional) Fallback logic implemented and documented.
