
🖥️ ZQS Terminal — Rust Résumé in Your Browser

Netlify Status WebAssembly Rust

An immersive single-page web terminal that reveals Alexandre DO-O ALMEIDA's résumé through typed commands. Everything is written in Rust, compiled to WebAssembly, and ships as a framework-free SPA for fast cold starts.

ZQS Terminal landing screen

✨ Features

  • πŸŽ›οΈ Web-only terminal UI with history, autocomplete, theming, achievements, and an AI Mode toggle.
  • πŸ“¦ Static rΓ©sumΓ© data sourced from JSON so updates never require a recompile.
  • πŸ€– Optional AI concierge proxied through an Axum service that tracks spend limits (≀ €0.50/min, €2/hour & day, €10/month) and now uses Retrieval-Augmented Generation (OpenAI embeddings + Pinecone + SQLite) to cite rΓ©sumΓ© snippets.
  • πŸš€ Build pipeline ships optimized WebAssembly + minified CSS in static/, ready for any CDN with an optional Axum proxy.

🤖 AI Concierge Stack

  • Retrieval: python3 scripts/build_rag.py chunks every rΓ©sumΓ© JSON file, stores the canonical text in static/data/rag_chunks.db, and mirrors the embeddings in Pinecone (1,536‑dim text-embedding-3-small vectors).
  • Generation: /api/ai embeds each user question, fetches topK=4 matches from Pinecone, rebuilds the prompt from SQLite, and sends it to gpt-4o-mini (with Groq/Gemini fallbacks) while logging the chunk ids + similarity scores.
  • Transparency: every response returns a context_chunks array (id, source, topic, score) so tests and the UI can prove the answer was grounded instead of hallucinated.

🗂️ Repository Layout

.
├── Cargo.toml            # Workspace root + WebAssembly crate manifest
├── VERSION               # Project version (sync via scripts/bump_version.py)
├── Makefile              # Build/test/deploy entry points (wasm-pack, Netlify, logs)
├── netlify.toml          # Netlify deploy config (redirects, cache headers)
├── .env.example          # Template for required/optional environment variables
├── src/                  # Rust/WebAssembly terminal (commands, state, renderer)
├── server/               # Axum proxy (OpenAI relay, rate limiting, static host)
├── static/
│   ├── index.html        # Terminal shell + module bootstrap
│   ├── style.css         # Source stylesheet (minified to style.min.css)
│   ├── style.min.css     # Minified styles generated by make build
│   ├── data/             # Résumé JSON (profile, skills, experience, education, projects)
│   ├── images/           # Logos & Open Graph artwork
│   ├── icons/            # Favicons & manifest assets
│   ├── effects/          # Visual flourish assets (canvas, particles, etc.)
│   ├── cv/               # Standalone résumé viewer bundle
│   └── pkg/              # wasm-bindgen output mirrored from /pkg
├── pkg/                  # Raw wasm-pack artefacts (ignored by git)
├── scripts/
│   ├── bump_version.py   # Bumps VERSION + Cargo manifests
│   ├── live_smoke_test.py  # Production smoke test
│   ├── minify_css.py     # CSS minifier invoked by make build
│   ├── run_live_autotest.sh  # CI helper for the smoke test
│   └── serve.py          # Static dev server (writes server.log)
└── screenshot_zqsdev.png # README showcase image

🚀 Quick Start

Install the prerequisites once:

rustup target add wasm32-unknown-unknown
cargo install wasm-pack

Fetch dependencies and build the terminal bundle:

make build

⌨️ Commands

Inside the terminal, try:

help        about       skills       experience
education   projects    testimonials contact
faq         resume      theme        ai
clear

Flip on AI Mode with the toolbar button to ask natural-language questions. When disabled, helper chips provide quick access to the commands above.

πŸ› οΈ Development Workflow

# 1. Compile the WebAssembly bundle (writes static/pkg/)
make build

# 2. Run the full Rust stack (serves static assets + /api/ai)
export OPENAI_API_KEY=sk-your-key   # required for AI mode
make serve                          # http://localhost:3000 by default

# Optional: static-only dev server (no AI proxy, logs to server.log)
make serve-static                   # http://localhost:8765 by default

Useful overrides:

make serve HOST=127.0.0.1 SERVER_PORT=4000
make serve-static STATIC_PORT=9000

make build always refreshes static/pkg/ and static/style.min.css, both of which must ship alongside the rest of static/ for deployment.

📚 Retrieval-Augmented Answers

The AI concierge now pulls its context from a lightweight hybrid store:

  1. python3 scripts/build_rag.py (or make rag) parses every JSON file under static/data/, chunks it, writes static/data/rag_chunks.db, and (optionally) upserts fresh embeddings into Pinecone. Add --skip-pinecone if you only want to refresh SQLite during local work.
  2. Set OPENAI_API_KEY, PINECONE_API_KEY, and PINECONE_HOST=https://<index>-<project>.svc.<region>.pinecone.io before running the proxy. Optional knobs: PINECONE_NAMESPACE, RAG_DB_PATH (defaults to static/data/rag_chunks.db), RAG_TOP_K, RAG_MIN_SCORE, and OPENAI_EMBEDDING_MODEL (default text-embedding-3-small).
  3. On each AI request, the server embeds the question via OpenAI, queries Pinecone for the top chunks, hydrates the canonical text from SQLite, and injects those snippets (tagged [chunk-n]) into the LLM prompt so answers stay grounded and cite their sources.
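
A condensed sketch of that request flow, using only python3 and requests like the builder; the chunks table name and its columns are assumptions about the build_rag.py schema, so treat this as an illustration rather than the proxy's actual code:

import os, sqlite3, requests

question = "Which databases has Alexandre worked with?"

# 1. Embed the question (text-embedding-3-small, 1,536 dimensions).
embedding = requests.post(
    "https://api.openai.com/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "text-embedding-3-small", "input": question},
    timeout=30,
).json()["data"][0]["embedding"]

# 2. Fetch the top matches from Pinecone (RAG_TOP_K defaults to 4).
matches = requests.post(
    f"{os.environ['PINECONE_HOST']}/query",
    headers={"Api-Key": os.environ["PINECONE_API_KEY"]},
    json={"vector": embedding, "topK": 4},
    timeout=30,
).json()["matches"]

# 3. Hydrate the canonical text from SQLite and tag each snippet [chunk-n].
db = sqlite3.connect("static/data/rag_chunks.db")
for n, match in enumerate(matches, start=1):
    row = db.execute("SELECT text FROM chunks WHERE id = ?", (match["id"],)).fetchone()
    if row:
        print(f"[chunk-{n}] score={match['score']:.3f}: {row[0][:80]}")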

The builder depends only on python3 and the requests package; install it once with pip install requests if it is missing. make build automatically runs make rag at the end so your WASM artifacts and RAG bundle stay in sync. Run SKIP_RAG=1 make build if you need to bypass that step locally (e.g. when offline).

Inspect the bundled context with make rag-inspect, which prints per-source counts and a few sample chunk IDs.

After every deploy, run make autotest AUTOTEST_FLAGS="--base-url https://www.zqsdev.com" (or your preview URL) to ensure the AI response includes context_chunks metadata, proving the RAG layer is active.

✅ Tests & Quality Gates

make test   # wasm-pack test --node + cargo test for the proxy
make fmt    # cargo fmt across the workspace
make check  # cargo check --target wasm32-unknown-unknown

The CI pipeline should run the same trio so local runs stay in lockstep with automation.
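
The repo does not ship a workflow file, so as one illustrative wiring, a minimal GitHub Actions job running that trio might look like:

# .github/workflows/ci.yml — hypothetical workflow, not part of this repo
name: ci
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: rustup target add wasm32-unknown-unknown
      - run: cargo install wasm-pack
      - run: make fmt
      - run: make check
      - run: make test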

🩺 Live Production Smoke Test

Run make autotest (or python3 scripts/live_smoke_test.py directly) to exercise the deployed site once end-to-end. It validates the Netlify bundle and the /api/data payloads, then sends a single question to the AI concierge. Install the lone dependency with pip install requests, then wire either command into your scheduler (cron, GitHub Actions, etc.). Optional flags:

python3 scripts/live_smoke_test.py --json-output live-smoke.json
python3 scripts/live_smoke_test.py --ai-question "What's new with Alexandre?"
# Or via make (flags are forwarded):
make autotest AUTOTEST_FLAGS="--json-output live-smoke.json"

The script exits non-zero on failure so monitors can trigger alerts. Leave the AI question count at one per run to respect the production rate limits. For cron jobs, scripts/run_live_autotest.sh wraps the Python entry point with sensible defaults.
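
For a cron-based monitor, a single crontab entry is enough; the schedule, repository path, and log path below are illustrative:

# Run the smoke test nightly at 06:15; failures surface via the non-zero exit (or Pushover, see below).
15 6 * * * cd /opt/zqsdev/repo && ./scripts/run_live_autotest.sh >> /var/log/zqs-autotest.log 2>&1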

If PUSHOVER_API_TOKEN and PUSHOVER_USER_KEY are present (in the environment, .env.local, or .env), the script will send a Pushover alert only when a check fails or the run ends early with skipped tests. Disable that behaviour with --no-pushover or AUTOTEST_FLAGS="--no-pushover" if needed.

🔑 Environment Variables

  1. Copy the template:
    cp .env.example .env
  2. Update at least OPENAI_API_KEY=... if you plan to enable AI Mode locally.

OPENAI_API_KEY is the only required secret today. The template also reserves slots for GROQ_API_KEY, PUSHOVER_USER_KEY, and PUSHOVER_API_TOKEN so future integrations can reuse the same workflow. The proxy loads .env.local first, then .env, which keeps machine-specific overrides out of version control. Both files are ignored by git so real keys stay on your machine.
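
For reference, a filled-in .env might look like the sketch below (placeholder values; the Pinecone entries are only needed when the proxy should run the RAG queries described above):

# .env — never commit real keys; every value here is a placeholder
OPENAI_API_KEY=sk-your-key
# Reserved slots for future integrations
GROQ_API_KEY=
PUSHOVER_USER_KEY=
PUSHOVER_API_TOKEN=
# RAG (only if the proxy should query Pinecone)
PINECONE_API_KEY=
PINECONE_HOST=https://<index>-<project>.svc.<region>.pinecone.io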

📦 Versioning & Release Workflow

  • βœ… Run make build and make test before handing changes off so static/pkg/ and the proxy both stay green.
  • ⬆️ Bump the version with python3 scripts/bump_version.py (touches VERSION, Cargo.toml, and server/Cargo.toml). The script defaults to patch releases; pass --minor or --major when needed.
  • ✍️ Commit only the sources, regenerated assets under static/pkg/, and version bumps. Artifacts in /pkg, local env files, and logs (server.log) are ignored by default.

🎨 Customising the Résumé

  • πŸ”— Update the rΓ©sumΓ© link in static/data/profile.json (links.resume_url).
  • 🧾 Edit the JSON files in static/data/ to refresh profile details, experiences, and skills.

🚢 Deployment

The server is optional at runtime; the public site is served from the static bundle.

☁️ Netlify (www.zqsdev.com & zqsdev.com)

  • 🌐 netlify.toml owns redirects so the SPA loads everywhere while cv.zqsdev.com serves the rΓ©sumΓ© viewer and calendly.zqsdev.com forwards to Calendly.
  • πŸ” /api/* requests proxy through Netlify to https://api.zqsdev.com/api/:splat, keeping browser requests same-origin while hitting the Axum backend.
  • πŸ” Install the Netlify CLI (npm install -g netlify-cli) and authenticate once with netlify login or NETLIFY_AUTH_TOKEN.
  • πŸš€ make deploy-preview β†’ runs make build then netlify deploy --dir static --config netlify.toml.
  • πŸ” make deploy-prod β†’ same flow with --prod. Pass extra flags via NETLIFY_FLAGS (e.g. NETLIFY_FLAGS="--alias staging").

🏡 Self-hosting (optional)

  1. πŸ› οΈ Run make build.
  2. πŸ“€ Publish the contents of static/ (including static/pkg/) to your CDN or object store.

If you want AI Mode in production, deploy the proxy (e.g. on Fly.io, Railway, or a small VPS) with:

  • πŸ”‘ OPENAI_API_KEY set.
  • βš™οΈ Optional HOST, PORT, and STATIC_DIR overrides.

The proxy reads static/data/*.json at startup, forwards questions to gpt-4o-mini, and enforces spend ceilings before gracefully falling back to the classic terminal experience when limits trigger.
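
For instance, a bare-bones launch on a VPS could look like the following (the binary path matches the systemd setup below; all values are illustrative):

OPENAI_API_KEY=sk-your-key HOST=0.0.0.0 PORT=8787 STATIC_DIR=/opt/zqsdev/static \
  /opt/zqsdev/bin/zqs-terminal-server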

🧭 Systemd service (production)

  • 🧾 Unit file: /etc/systemd/system/zqs-terminal.service runs /opt/zqsdev/bin/zqs-terminal-server as the zqsdev user with WorkingDirectory=/opt/zqsdev.
  • 🌑️ Environment lives in /etc/zqsdev/server.env, including HOST=0.0.0.0, PORT=8787, STATIC_DIR=/opt/zqsdev/static, and the API keys used at runtime.
  • πŸ›ŽοΈ Manage the service with sudo systemctl status|restart zqs-terminal.service; logs stream to /opt/zqsdev/backend.log (mirrored here as ./backend.log) and via journalctl -u zqs-terminal.service.
  • πŸ“‘ Run make backend-log to tail the rolling log from the repository root.
  • πŸ” Public ingress: api.zqsdev.com terminates TLS with Nginx (config at /etc/nginx/sites-enabled/api.zqsdev.com) and proxies to the Axum service on 127.0.0.1:8787.
  • The binary listens on port 8787/tcp (/api/ai) and restarts automatically on failure.

Built with 🦀 Rust and ❤️ by Alexandre DO-O ALMEIDA (ZQSDev). Enjoy the terminal! 🙂