Yapless is a multi-mode AI chat system with integrated Web Search (Web, Reddit, Wikipedia) and Retrieval-Augmented Generation (RAG) capabilities.
It follows a microservice + macroservice architecture for scalability and modularity.
| Service | Tech Stack | Role |
|---|---|---|
| Client | React.js + TypeScript | Frontend UI |
| Main Service | Node.js + TypeScript | API Gateway & Orchestration |
| Web Search Service | Node.js + TypeScript | Search + Scraping |
| LLM Service | Python + FastAPI + Chroma | Vectorization + Gemini LLM |
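The Main Service's gateway-and-orchestration role can be pictured as a small routing layer. This is a hedged sketch only: the real main-service code is not shown in this README, so the interfaces and function names below are assumptions for illustration.

```typescript
// Hypothetical client interfaces for the two downstream services; the real
// main-service code is not shown in this README, so these are assumptions.
interface SearchClient { search(q: string): Promise<string[]>; }
interface LlmClient { answer(q: string, context: string[]): Promise<string>; }

// Search-augmented chat: when search is enabled, gather context from the
// Web Search Service first, then hand question + context to the LLM Service.
async function handleChat(
  question: string,
  useSearch: boolean,
  search: SearchClient,
  llm: LlmClient,
): Promise<string> {
  const context = useSearch ? await search.search(question) : [];
  return llm.answer(question, context);
}
```

Injecting the two clients keeps the gateway logic testable without live services.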
- Multiple chat modes: detailed, sarcastic, auto
- Web, Reddit, and Wikipedia search
- Multi-tier scraping (Readability → Cheerio → Puppeteer)
- Vector storage & retrieval with Chroma DB
- Fully containerized (Docker Compose)
- Can also run without Docker for development or debugging
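The multi-tier scraping chain falls back from fast static parsing (Readability, Cheerio) to a full headless browser (Puppeteer) only when earlier tiers come up empty. A minimal sketch of that fallback pattern, with the tier functions left abstract since the actual scraper code is not shown here:

```typescript
// A tier takes a URL and returns extracted text, or null if it got nothing.
type Scraper = (url: string) => Promise<string | null>;

// Try each tier in order; fall through when a tier returns nothing or throws
// (e.g. a JS-heavy page that defeats static parsing). The real service's
// tiers would wrap Readability, Cheerio, and Puppeteer respectively.
async function scrapeWithFallback(url: string, tiers: Scraper[]): Promise<string | null> {
  for (const tier of tiers) {
    try {
      const text = await tier(url);
      if (text && text.trim().length > 0) return text;
    } catch {
      // This tier failed; move on to the next, heavier one.
    }
  }
  return null;
}
```

Ordering tiers cheapest-first keeps Puppeteer's browser startup cost off the common path.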
Choose one setup method: Docker (recommended) or a manual, per-service setup.

For the Docker setup:
- Docker >= 20.x
- Docker Compose >= 2.x

For the manual setup:
- Node.js >= 18
- Python >= 3.10
- npm / yarn
- pip
Each service requires its own `.env` file. A `.env.sample` template is provided per service: copy it to `.env` (or `.env.local` for the client) and fill in your own values.
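The copies can be scripted in one pass. A sketch, assuming the service directory layout used elsewhere in this README; the guards make it safe to re-run without clobbering an existing `.env`:

```shell
# Copy each service's sample env file into place.
# Guarded so an existing .env is never overwritten on re-runs.
setup_env_files() {
  for dir in services/web-search services/llm-service services/main-service; do
    if [ -f "$dir/.env.sample" ] && [ ! -f "$dir/.env" ]; then
      cp "$dir/.env.sample" "$dir/.env"
    fi
  done
  # The client reads .env.local instead of .env.
  if [ -f client/.env.sample ] && [ ! -f client/.env.local ]; then
    cp client/.env.sample client/.env.local
  fi
}
setup_env_files
```

Remember to open each copied file afterwards and fill in real values — the samples are templates, not working configs.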
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/yapless.git
  cd yapless
  ```

- Set up environment variables for all services using the provided `.env.sample` files.

- Run the stack

  ```bash
  docker compose up --build
  ```

- Access the app
  - Frontend: http://localhost:3000
  - Main API: http://localhost:8080
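Once the stack is up, the gateway can be exercised from any HTTP client. A minimal sketch from Node 18+ (which ships a global `fetch`): the `/chat` path and port come from this README, but the request body shape (`message`, `mode`) is an assumption, not something this README specifies.

```typescript
// The chat modes listed in the features section.
type ChatMode = "detailed" | "sarcastic" | "auto";

// Build the request separately from sending it, so the shape is easy to test.
// NOTE: the { message, mode } body is an assumed shape, not a documented API.
function buildChatRequest(message: string, mode: ChatMode) {
  return {
    url: "http://localhost:8080/chat",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message, mode }),
    },
  };
}

// With the stack running, send it and parse the JSON reply.
async function chat(message: string, mode: ChatMode = "auto"): Promise<unknown> {
  const { url, init } = buildChatRequest(message, mode);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`chat failed: ${res.status}`);
  return res.json();
}
```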
Run each service in its own terminal.

Chroma DB:

```bash
pip install chromadb
chromadb run --path ./data/chroma
```

Web Search Service:

```bash
cd services/web-search
cp .env.sample .env
npm install
npm run dev
```

LLM Service:

```bash
cd services/llm-service
cp .env.sample .env
pip install -r requirements.txt
uvicorn app:app --reload --host 0.0.0.0 --port 8000
```

Main Service:

```bash
cd services/main-service
cp .env.sample .env
npm install
npm run dev
```

Client:

```bash
cd client
cp .env.sample .env.local
npm install
npm run dev
```

Main Service endpoints:
- POST /chat → Chat with AI (with/without search)
- GET /health → Health check
Web Search Service endpoints:
- POST /search → Web/Reddit/Wikipedia search
- POST /scrape → Scrape and clean a URL

LLM Service endpoints:
- POST /vectorize → Store document vectors
- POST /query → Query LLM with retrieval
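The two RAG endpoints pair naturally: store documents first, then query with retrieval. A hedged sketch of building both calls — the endpoints and port 8000 come from this README's uvicorn command, but the JSON field names (`documents`, `question`, `topK`) are assumptions about the payload shape:

```typescript
const LLM_BASE = "http://localhost:8000";

// Store raw documents for vectorization into Chroma.
// NOTE: the { documents } body is an assumed shape, not a documented API.
function buildVectorizeRequest(documents: string[]) {
  return {
    url: `${LLM_BASE}/vectorize`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ documents }),
    },
  };
}

// Ask a question with retrieval; topK is an assumed knob for how many
// stored chunks to retrieve as context before prompting the LLM.
function buildQueryRequest(question: string, topK = 4) {
  return {
    url: `${LLM_BASE}/query`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question, topK }),
    },
  };
}
```

With the LLM service running, pass each `{ url, init }` pair to `fetch`: `/vectorize` populates the Chroma store and `/query` retrieves the closest chunks before the LLM answers.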