An immersive single-page web terminal that reveals Alexandre DO-O ALMEIDA's résumé through typed commands. Everything is written in Rust, compiled to WebAssembly, and ships as a framework-free SPA for fast cold starts.
- Web-only terminal UI with history, autocomplete, theming, achievements, and an AI Mode toggle.
- Static résumé data sourced from JSON so updates never require a recompile.
- Optional AI concierge proxied through an Axum service that tracks spend limits (≤ €0.50/min, €2/hour & day, €10/month) and now uses Retrieval-Augmented Generation (OpenAI embeddings + Pinecone + SQLite) to cite résumé snippets.
- Build pipeline ships optimized WebAssembly + minified CSS in `static/`, ready for any CDN with an optional Axum proxy.
- Retrieval: `python3 scripts/build_rag.py` chunks every résumé JSON file, stores the canonical text in `static/data/rag_chunks.db`, and mirrors the embeddings in Pinecone (1,536-dim `text-embedding-3-small` vectors).
- Generation: `/api/ai` embeds each user question, fetches `topK=4` matches from Pinecone, rebuilds the prompt from SQLite, and sends it to `gpt-4o-mini` (with Groq/Gemini fallbacks) while logging the chunk ids + similarity scores.
- Transparency: every response returns a `context_chunks` array (id, source, topic, score) so tests and the UI can prove the answer was grounded instead of hallucinated (a response-shape sketch follows below).
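The exact response schema is not documented beyond those four fields, but a minimal sketch of how the grounded `/api/ai` payload could be modelled on the Axum side looks like this (struct names and the `answer` field are assumptions; only `context_chunks` and its id/source/topic/score entries come from the list above):

```rust
// Illustrative response types for /api/ai. Only `context_chunks` and its
// id/source/topic/score fields are documented; everything else is a guess.
use serde::Serialize;

#[derive(Serialize)]
struct ContextChunk {
    id: String,     // e.g. "chunk-1", also logged server-side with its score
    source: String, // which résumé JSON file under static/data/ it came from
    topic: String,  // section the chunk was cut from
    score: f32,     // Pinecone similarity score
}

#[derive(Serialize)]
struct AiResponse {
    answer: String,                    // assumed field name for the LLM answer
    context_chunks: Vec<ContextChunk>, // lets tests and the UI prove grounding
}

fn main() {
    let resp = AiResponse {
        answer: "Grounded answer text".into(),
        context_chunks: vec![ContextChunk {
            id: "chunk-1".into(),
            source: "experience.json".into(), // hypothetical file name
            topic: "experience".into(),
            score: 0.87,
        }],
    };
    println!("{}", serde_json::to_string_pretty(&resp).unwrap());
}
```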
.
├── Cargo.toml            # Workspace root + WebAssembly crate manifest
├── VERSION               # Project version (sync via scripts/bump_version.py)
├── Makefile              # Build/test/deploy entry points (wasm-pack, Netlify, logs)
├── netlify.toml          # Netlify deploy config (redirects, cache headers)
├── .env.example          # Template for required/optional environment variables
├── src/                  # Rust/WebAssembly terminal (commands, state, renderer)
├── server/               # Axum proxy (OpenAI relay, rate limiting, static host)
├── static/
│   ├── index.html        # Terminal shell + module bootstrap
│   ├── style.css         # Source stylesheet (minified to style.min.css)
│   ├── style.min.css     # Minified styles generated by make build
│   ├── data/             # Résumé JSON (profile, skills, experience, education, projects)
│   ├── images/           # Logos & Open Graph artwork
│   ├── icons/            # Favicons & manifest assets
│   ├── effects/          # Visual flourish assets (canvas, particles, etc.)
│   ├── cv/               # Standalone résumé viewer bundle
│   └── pkg/              # wasm-bindgen output mirrored from /pkg
├── pkg/                  # Raw wasm-pack artefacts (ignored by git)
├── scripts/
│   ├── bump_version.py       # Bumps VERSION + Cargo manifests
│   ├── live_smoke_test.py    # Production smoke test
│   ├── minify_css.py         # CSS minifier invoked by make build
│   ├── run_live_autotest.sh  # CI helper for the smoke test
│   └── serve.py              # Static dev server (writes server.log)
└── screenshot_zqsdev.png # README showcase image
Install the prerequisites once:
rustup target add wasm32-unknown-unknown
cargo install wasm-pack

Fetch dependencies and build the terminal bundle:

make build

Inside the terminal, try:
help about skills experience
education projects testimonials contact
faq resume theme ai
clear
Flip on AI Mode with the toolbar button to ask natural-language questions. When disabled, helper chips provide quick access to the commands above.
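The dispatch behind those commands lives in the Rust/WASM crate under `src/`. As a rough, hypothetical sketch (the real implementation also drives history, autocomplete, theming, and achievements; `run_command` is an invented name, not the crate's API):

```rust
// Hypothetical dispatcher: maps a typed command to its output. The real
// terminal in src/ renders rich output and tracks state, history, and themes.
fn run_command(input: &str) -> String {
    match input.trim() {
        "help" => "Available commands: about, skills, experience, ...".to_string(),
        "about" => "Profile summary for Alexandre DO-O ALMEIDA".to_string(),
        "clear" => String::new(), // renderer treats an empty result as a screen clear
        other => format!("command not found: {other} (try `help`)"),
    }
}

fn main() {
    println!("{}", run_command("help"));
}
```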
# 1. Compile the WebAssembly bundle (writes static/pkg/)
make build
# 2. Run the full Rust stack (serves static assets + /api/ai)
export OPENAI_API_KEY=sk-your-key # required for AI mode
make serve # http://localhost:3000 by default
# Optional: static-only dev server (no AI proxy, logs to server.log)
make serve-static # http://localhost:8765 by default

Useful overrides:

make serve HOST=127.0.0.1 SERVER_PORT=4000
make serve-static STATIC_PORT=9000

`make build` always refreshes `static/pkg/` and `static/style.min.css`, both of which must ship alongside the rest of `static/` for deployment.
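For orientation, here is an illustrative sketch of how the Axum binary could honour those overrides (assuming make's `HOST`/`SERVER_PORT` map onto environment variables the server reads; the real wiring lives in `server/` and may differ):

```rust
// Illustrative only: an Axum binary honouring HOST/PORT/STATIC_DIR overrides
// while serving static/ and mounting the AI route. Actual wiring lives in server/.
use axum::{routing::post, Router};
use tower_http::services::ServeDir;

async fn ai_handler() -> &'static str {
    "stub" // the real handler relays the question to the LLM providers
}

#[tokio::main]
async fn main() {
    let host = std::env::var("HOST").unwrap_or_else(|_| "0.0.0.0".into());
    let port = std::env::var("PORT").unwrap_or_else(|_| "3000".into()); // SERVER_PORT in the Makefile
    let static_dir = std::env::var("STATIC_DIR").unwrap_or_else(|_| "static".into());

    let app = Router::new()
        .route("/api/ai", post(ai_handler))
        .fallback_service(ServeDir::new(static_dir)); // serves index.html, pkg/, data/, ...

    let listener = tokio::net::TcpListener::bind(format!("{host}:{port}"))
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}
```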
The AI concierge now pulls its context from a lightweight hybrid store:
- `python3 scripts/build_rag.py` (or `make rag`) parses every JSON file under `static/data/`, chunks it, writes `static/data/rag_chunks.db`, and (optionally) upserts fresh embeddings into Pinecone. Add `--skip-pinecone` if you only want to refresh SQLite during local work.
- Set `OPENAI_API_KEY`, `PINECONE_API_KEY`, and `PINECONE_HOST=https://<index>-<project>.svc.<region>.pinecone.io` before running the proxy. Optional knobs: `PINECONE_NAMESPACE`, `RAG_DB_PATH` (defaults to `static/data/rag_chunks.db`), `RAG_TOP_K`, `RAG_MIN_SCORE`, and `OPENAI_EMBEDDING_MODEL` (default `text-embedding-3-small`).
- On each AI request, the server embeds the question via OpenAI, queries Pinecone for the top chunks, hydrates the canonical text from SQLite, and injects those snippets (tagged `[chunk-n]`) into the LLM prompt so answers stay grounded and cite their sources (see the hydration sketch below).
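The hydration step in the last bullet amounts to a small SQLite lookup. The sketch below assumes a hypothetical `chunks(id, source, text)` table inside `rag_chunks.db` and omits the OpenAI embedding and Pinecone query; it only shows how fetched ids could become `[chunk-n]`-tagged prompt snippets:

```rust
// Hydration sketch: turn Pinecone match ids into [chunk-n]-tagged snippets.
// Assumes a hypothetical `chunks(id, source, text)` table in rag_chunks.db.
use rusqlite::{params, Connection};

fn hydrate_context(db_path: &str, ids: &[String]) -> rusqlite::Result<String> {
    let conn = Connection::open(db_path)?;
    let mut stmt = conn.prepare("SELECT source, text FROM chunks WHERE id = ?1")?;
    let mut context = String::new();
    for (n, id) in ids.iter().enumerate() {
        let (source, text): (String, String) =
            stmt.query_row(params![id], |row| Ok((row.get(0)?, row.get(1)?)))?;
        // Tag each snippet so the model and the UI can cite it later.
        context.push_str(&format!("[chunk-{}] ({source}) {text}\n", n + 1));
    }
    Ok(context)
}

fn main() -> rusqlite::Result<()> {
    let ids = vec!["exp-0001".to_string(), "skills-0002".to_string()]; // hypothetical ids
    println!("{}", hydrate_context("static/data/rag_chunks.db", &ids)?);
    Ok(())
}
```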
The builder only depends on `python3` and `requests`; install the latter once with `pip install requests` if it is missing. `make build` automatically runs `make rag` at the end so your WASM artifacts and RAG bundle stay in sync. Run `SKIP_RAG=1 make build` if you need to bypass that step locally (e.g. when offline).
Inspect the bundled context with make rag-inspect, which prints per-source counts and a few sample chunk IDs.
After every deploy, run the smoke test against https://www.zqsdev.com (or your preview URL), e.g. `make autotest AUTOTEST_FLAGS="--base-url https://www.zqsdev.com"`, to ensure the AI response includes `context_chunks` metadata, proving the RAG layer is active.
make test # wasm-pack test --node + cargo test for the proxy
make fmt # cargo fmt across the workspace
make check # cargo check --target wasm32-unknown-unknown

The CI pipeline should run the same trio so local runs stay in lockstep with automation.
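`make test` drives `wasm-pack test --node` for the terminal crate. A minimal example of the kind of test it discovers (the `parse_command` helper is hypothetical, not the crate's real API):

```rust
// Hypothetical wasm-bindgen test of the sort `wasm-pack test --node` runs.
// `parse_command` stands in for whatever the terminal crate actually exposes.
use wasm_bindgen_test::*;

fn parse_command(line: &str) -> Option<&str> {
    line.split_whitespace().next()
}

#[wasm_bindgen_test]
fn help_is_recognised() {
    assert_eq!(parse_command("help --verbose"), Some("help"));
}
```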
Run `make autotest` (or directly `python3 scripts/live_smoke_test.py`) to exercise the deployed site once, end-to-end. It validates the Netlify bundle and the /api/data payloads, then sends a single question to the AI concierge. Install the lone dependency with `pip install requests`, then wire either command into your scheduler (cron, GitHub Actions, etc.). Optional flags:
python3 scripts/live_smoke_test.py --json-output live-smoke.json
python3 scripts/live_smoke_test.py --ai-question "What's new with Alexandre?"
# Or via make (flags are forwarded):
make autotest AUTOTEST_FLAGS="--json-output live-smoke.json"

The script exits non-zero on failure so monitors can trigger alerts. Leave the AI question count at one per run to respect the production rate limits. For cron jobs, scripts/run_live_autotest.sh wraps the Python entry point with sensible defaults.
If PUSHOVER_API_TOKEN and PUSHOVER_USER_KEY are present (in the environment, .env.local, or .env), the script will send a Pushover alert only when a check fails or the run ends early with skipped tests. Disable that behaviour with --no-pushover or AUTOTEST_FLAGS="--no-pushover" if needed.
- Copy the template: `cp .env.example .env`
- Update at least `OPENAI_API_KEY=...` if you plan to enable AI Mode locally.
OPENAI_API_KEY is the only required secret today. The template also reserves slots for GROQ_API_KEY, PUSHOVER_USER_KEY, and PUSHOVER_API_TOKEN so future integrations can reuse the same workflow. The proxy loads .env.local first, then .env, which keeps machine-specific overrides out of version control. Both files are ignored by git so real keys stay on your machine.
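That precedence matches how the `dotenvy` crate behaves, since it never overwrites variables that are already set. The following is only a sketch of the described behaviour, not necessarily the proxy's actual loader:

```rust
// Sketch of the described precedence with the `dotenvy` crate (an assumption;
// the proxy may load its env differently). `from_filename` never overwrites a
// variable that is already set, so .env.local beats .env, and the real
// environment beats both.
fn load_env() {
    let _ = dotenvy::from_filename(".env.local"); // machine-specific overrides
    let _ = dotenvy::from_filename(".env");       // shared defaults
}

fn main() {
    load_env();
    match std::env::var("OPENAI_API_KEY") {
        Ok(_) => println!("AI Mode can be enabled"),
        Err(_) => println!("OPENAI_API_KEY missing; AI Mode stays off"),
    }
}
```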
- Run `make build` and `make test` before handing changes off so `static/pkg/` and the proxy both stay green.
- Bump the version with `python3 scripts/bump_version.py` (touches `VERSION`, `Cargo.toml`, and `server/Cargo.toml`). The script defaults to patch releases; pass `--minor` or `--major` when needed.
- Commit only the sources, regenerated assets under `static/pkg/`, and version bumps. Artifacts in `/pkg`, local env files, and logs (`server.log`) are ignored by default.
- Update the résumé link in `static/data/profile.json` (`links.resume_url`); a loading sketch follows below.
- Edit the JSON files in `static/data/` to refresh profile details, experiences, and skills.
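As a sketch of how a consumer (for example the proxy, which reads `static/data/*.json` at startup) might pick up the résumé link, assuming serde; every field except `links.resume_url` is illustrative:

```rust
// Illustrative read of static/data/profile.json; apart from links.resume_url
// every field name here is an assumption, and unknown fields are ignored.
use serde::Deserialize;

#[derive(Deserialize)]
struct Links {
    resume_url: String,
}

#[derive(Deserialize)]
struct Profile {
    links: Links,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string("static/data/profile.json")?;
    let profile: Profile = serde_json::from_str(&raw)?;
    println!("résumé link: {}", profile.links.resume_url);
    Ok(())
}
```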
The server is optional at runtime; the public site is served from the static bundle.
Netlify (www.zqsdev.com & zqsdev.com)
- `netlify.toml` owns redirects so the SPA loads everywhere, while `cv.zqsdev.com` serves the résumé viewer and `calendly.zqsdev.com` forwards to Calendly.
- `/api/*` requests proxy through Netlify to `https://api.zqsdev.com/api/:splat`, keeping browser requests same-origin while hitting the Axum backend.
- Install the Netlify CLI (`npm install -g netlify-cli`) and authenticate once with `netlify login` or `NETLIFY_AUTH_TOKEN`.
- `make deploy-preview` runs `make build` then `netlify deploy --dir static --config netlify.toml`.
- `make deploy-prod` does the same with `--prod`. Pass extra flags via `NETLIFY_FLAGS` (e.g. `NETLIFY_FLAGS="--alias staging"`).
- Run `make build`.
- Publish the contents of `static/` (including `static/pkg/`) to your CDN or object store.
If you want AI Mode in production, deploy the proxy (e.g. on Fly.io, Railway, or a small VPS) with:
- `OPENAI_API_KEY` set.
- Optional `HOST`, `PORT`, and `STATIC_DIR` overrides.
The proxy reads `static/data/*.json` at startup, forwards questions to `gpt-4o-mini`, and enforces the spend ceilings, gracefully falling back to the classic terminal experience when a limit triggers.
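The exact accounting lives in `server/`, but the ceilings quoted above can be pictured as a simple budget check. The sketch below tracks spend in euro cents per rolling window and is an illustration, not the proxy's real data structure:

```rust
// Budget-check sketch for the documented ceilings (<= 0.50 EUR/min, 2 EUR/hour,
// 2 EUR/day, 10 EUR/month). Amounts are euro cents; window bookkeeping is omitted.
struct SpendTracker {
    minute: u32,
    hour: u32,
    day: u32,
    month: u32,
}

impl SpendTracker {
    fn allows(&self, estimated_cost: u32) -> bool {
        self.minute + estimated_cost <= 50
            && self.hour + estimated_cost <= 200
            && self.day + estimated_cost <= 200
            && self.month + estimated_cost <= 1_000
    }
}

fn main() {
    let spent = SpendTracker { minute: 48, hour: 120, day: 150, month: 600 };
    // An extra 0.04 EUR request would break the per-minute ceiling, so the proxy
    // would fall back to the classic terminal experience instead of calling out.
    println!("allowed: {}", spent.allows(4)); // prints "allowed: false"
}
```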
- Unit file: `/etc/systemd/system/zqs-terminal.service` runs `/opt/zqsdev/bin/zqs-terminal-server` as the `zqsdev` user with `WorkingDirectory=/opt/zqsdev`.
- Environment lives in `/etc/zqsdev/server.env`, including `HOST=0.0.0.0`, `PORT=8787`, `STATIC_DIR=/opt/zqsdev/static`, and the API keys used at runtime.
- Manage the service with `sudo systemctl status|restart zqs-terminal.service`; logs stream to `/opt/zqsdev/backend.log` (mirrored here as `./backend.log`) and via `journalctl -u zqs-terminal.service`.
- Run `make backend-log` to tail the rolling log from the repository root.
- Public ingress: `api.zqsdev.com` terminates TLS with Nginx (config at `/etc/nginx/sites-enabled/api.zqsdev.com`) and proxies to the Axum service on `127.0.0.1:8787`.
- The binary listens on port `8787/tcp` (`/api/ai`) and restarts automatically on failure.
Built with 🦀 Rust and ❤️ by Alexandre DO-O ALMEIDA (ZQSDev). Enjoy the terminal!
