examples/edge/README.md (84 additions, 7 deletions)
@@ -156,6 +156,64 @@ gossipNode.join_swarm(relayUrl); // Eventually consistent, Byzantine-tolerant

---

### Web Workers: Keep Your UI Responsive

When you're running vector searches on thousands of vectors or encrypting large messages, you don't want your app to freeze. Web Workers solve this by running heavy operations in background threads while your UI stays smooth.

**The problem without workers:**
```javascript
// This blocks your UI - buttons won't click, animations freeze
const results = await vectorDB.search(query, k); // 100ms+ blocking
```
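
If you want to see the stall for yourself, the browser's Long Tasks API reports any main-thread block over 50ms. A small sketch, browser-only and assuming the `vectorDB`, `query`, and `k` values from the snippet above; support for `'longtask'` entries varies by browser:

```javascript
// Browser-only sketch: log main-thread stalls longer than 50ms.
// The browser emits a 'longtask' entry whenever the main thread is
// blocked past that threshold, so the search below will show up here.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn(`Main thread blocked for ${Math.round(entry.duration)}ms`);
  }
});
observer.observe({ entryTypes: ['longtask'] });

await vectorDB.search(query, k); // the blocking call from above
```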

**The solution with WorkerPool:**
```javascript
import { WorkerPool } from '@ruvector/edge/worker-pool';

// Create a pool of background workers (auto-detects CPU cores)
const pool = new WorkerPool(
new URL('@ruvector/edge/worker', import.meta.url),
new URL('@ruvector/edge/ruvector_edge.js', import.meta.url),
{ dimensions: 128, metric: 'cosine', useHnsw: true }
);

await pool.init(); // Workers load WASM in parallel

// Now searches run in background - UI stays responsive!
const results = await pool.search(queryVector, 10);

// Insert 10,000 vectors? Workers split the work automatically
const ids = await pool.insertBatch(largeDataset); // Parallel insertion

// Search multiple queries at once
const allResults = await pool.searchBatch(queries, 10); // Parallel search
```

**What the Worker Pool does for you:**

| Feature | What It Means |
|---------|---------------|
| **Auto-scaling** | Creates workers based on your CPU cores (2-8 typically) |
| **Load balancing** | Distributes work evenly across workers |
| **Batch splitting** | Large datasets are chunked and processed in parallel |
| **Timeout handling** | Stuck operations fail gracefully after 30 seconds |
| **Error recovery** | One failing worker doesn't crash your whole app |

**When to use workers:**

| Scenario | Use Workers? | Why |
|----------|--------------|-----|
| 100+ vectors | Maybe | Small searches are fast enough inline |
| 1,000+ vectors | Yes | Noticeable speedup from parallelism |
| 10,000+ vectors | Definitely | 3-4x faster with worker pool |
| Batch inserts | Yes | Don't block UI during data loading |
| Real-time search | Yes | Keep typing responsive during search |
| Mobile devices | Yes | Avoid UI jank on slower processors |

**Simple rule:** If the operation takes more than 50ms, use a worker.
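
A minimal sketch of that rule, assuming the `vectorDB` and `pool` instances from the snippets above; the `responsiveSearch` helper, the cutoff constant, and the app-tracked `indexSize` count are illustrative, not part of the package API:

```javascript
// Illustrative routing: small indexes search inline, larger ones go through
// the pool, and a failed or timed-out worker falls back gracefully.
const WORKER_CUTOFF = 1_000; // from the "When to use workers" table above

async function responsiveSearch(query, k, indexSize) {
  if (indexSize < WORKER_CUTOFF) {
    return vectorDB.search(query, k);   // small search: inline is fast enough
  }
  try {
    return await pool.search(query, k); // heavy search: background worker
  } catch (err) {
    // Stuck operations reject after the 30s timeout instead of hanging the UI,
    // so one failing worker never takes the whole app down.
    console.warn('Worker search failed, falling back inline:', err);
    return vectorDB.search(query, k);
  }
}
```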

---

### Quick Start

```bash
@@ -187,13 +245,15 @@ const best = matcher.find_best_agent("write a function");

1. [Why Edge-First?](#why-edge-first)
2. [Features](#features)
-3. [Tutorial: Build Your First Swarm](#tutorial-build-your-first-swarm)
-4. [P2P Transport Options](#p2p-transport-options)
-5. [Free Infrastructure](#free-infrastructure-zero-cost-at-any-scale)
-6. [Architecture](#architecture)
-7. [API Reference](#api-reference)
-8. [Performance](#performance)
-9. [Security](#security)
+3. [Consensus Modes](#consensus-modes-trusted-vs-open)
+4. [Web Workers](#web-workers-keep-your-ui-responsive)
+5. [Tutorial: Build Your First Swarm](#tutorial-build-your-first-swarm)
+6. [P2P Transport Options](#p2p-transport-options)
+7. [Free Infrastructure](#free-infrastructure-zero-cost-at-any-scale)
+8. [Architecture](#architecture)
+9. [API Reference](#api-reference)
+10. [Performance](#performance)
+11. [Security](#security)

---

@@ -1097,6 +1157,23 @@ comp.decompress(data)
comp.condition() // "excellent"|"good"|"poor"|"critical"
```

### WorkerPool (Web Workers)
```javascript
import { WorkerPool } from '@ruvector/edge/worker-pool';

const pool = new WorkerPool(workerUrl, wasmUrl, options);
await pool.init() // Start workers
pool.insert(vector, id, metadata) // Insert single vector
pool.insertBatch(entries) // Parallel batch insert
pool.search(query, k, filter) // Search k nearest
pool.searchBatch(queries, k) // Parallel multi-query
pool.delete(id) // Remove vector
pool.get(id) // Retrieve by ID
pool.len() // Count vectors
pool.getStats() // {poolSize, busyWorkers, ...}
pool.terminate() // Stop all workers
```
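
A minimal lifecycle sketch stringing those calls together. It assumes the same `workerUrl`/`wasmUrl` values shown in the Web Workers section, that typed-array vectors are accepted, and that the metadata object is just a placeholder:

```javascript
// Hedged end-to-end sketch using only the calls listed above.
const pool = new WorkerPool(workerUrl, wasmUrl, { dimensions: 128, metric: 'cosine' });
await pool.init();

try {
  const vec = new Float32Array(128).fill(0.1);     // placeholder embedding
  await pool.insert(vec, 'doc-1', { lang: 'en' }); // metadata shape is illustrative
  const hits = await pool.search(vec, 5);          // 5 nearest neighbours
  console.log(hits, pool.getStats());              // results + pool health
} finally {
  pool.terminate();                                // always release the workers
}
```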

---

## Performance
examples/edge/pkg/README.md (53 additions, 0 deletions)
@@ -50,6 +50,7 @@ This library gives you everything you need to build distributed AI systems: cryp
| **Post-Quantum** | Hybrid signatures | Future-proof |
| **Neural Networks** | Spiking + STDP | Bio-inspired learning |
| **Compression** | Adaptive 4-32x | Network-aware |
| **Web Workers** | Worker pool | Parallel ops, non-blocking UI |

### What It Costs

@@ -84,6 +85,7 @@ RuVector provides a complete edge AI platform. This package (`@ruvector/edge`) i
│ ✓ Post-Quantum Crypto ✓ RVLite Vector DB (260KB) │
│ ✓ Spiking Neural Networks SQL + SPARQL + Cypher queries │
│ ✓ Adaptive Compression IndexedDB persistence │
│ ✓ Web Worker Pool │
│ │
│ Best for: ✓ SONA Neural Router (238KB) │
│ • Lightweight P2P apps Self-learning with LoRA │
@@ -156,6 +158,57 @@ gossipNode.join_swarm(relayUrl); // Eventually consistent, Byzantine-tolerant

---

### Web Workers: Keep the UI Responsive

Heavy operations (vector search, encryption, neural network inference) run in Web Workers to avoid blocking the main thread. The package includes a ready-to-use worker pool:

```javascript
import { WorkerPool } from '@ruvector/edge/worker-pool';

// Create worker pool (auto-detects CPU cores)
const pool = new WorkerPool(
new URL('@ruvector/edge/worker', import.meta.url),
new URL('@ruvector/edge/ruvector_edge_bg.wasm', import.meta.url),
{
poolSize: navigator.hardwareConcurrency,
dimensions: 384,
useHnsw: true
}
);

await pool.init();

// Operations run in parallel across workers
await pool.insert(embedding, 'doc-1', { title: 'Hello' });
await pool.insertBatch([
{ vector: emb1, id: 'doc-2' },
{ vector: emb2, id: 'doc-3' },
{ vector: emb3, id: 'doc-4' }
]);

// Search distributed across workers
const results = await pool.search(queryEmbedding, 10);

// Batch search (each query on different worker)
const batchResults = await pool.searchBatch([query1, query2, query3], 10);

// Pool statistics
console.log(pool.getStats());
// { poolSize: 8, busyWorkers: 2, idleWorkers: 6, pendingRequests: 0 }

// Clean up
pool.terminate();
```

**Worker Pool Features:**
- Round-robin task distribution with load balancing
- Automatic batch splitting across workers
- Promise-based API with 30s timeout
- Zero-copy transfers via transferable objects (see the sketch below)
- Works in browsers, Deno, and Cloudflare Workers
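
The zero-copy point matters when vectors are typed arrays: their `ArrayBuffer` can be handed to a worker instead of being cloned. A short sketch, assuming the pool accepts `Float32Array` inputs (the random values and IDs stand in for real embeddings):

```javascript
// Placeholder embedding; real code would use model output.
const embedding = Float32Array.from({ length: 384 }, () => Math.random());

// If the pool transfers the buffer (per the feature list above), it is
// detached on the main thread afterwards, so keep a copy if you still need it.
const keep = embedding.slice();
await pool.insert(embedding, 'doc-42', { source: 'sketch' });

console.log(keep.length); // the copy is still usable here
```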

---

### Quick Start

```bash