A WebRTC-based video conferencing application that serves as a testing laboratory for multiple AI-powered virtual background segmentation technologies. Compare and evaluate SAM2 (Segment Anything Model 2), BodyPix, MediaPipe, and WebGL models in real time.
Try the application online: https://segmentation-lab.onrender.com/
- 🧪 Test and compare multiple AI segmentation models in real time
- 📊 Detailed performance metrics for each model
- 🔄 Easy switching between models during a call
- 🎥 Real-time video conferencing using WebRTC
- 🔗 Easy meeting creation and joining with shareable meeting codes
- 🖼️ Multiple background options with support for:
- MediaPipe Selfie Segmentation (currently available)
- SAM2 (Segment Anything Model 2) - coming soon
- TensorFlow BodyPix - coming soon
- WebGL-based segmentation - coming soon
- 🏝️ Built-in background images (beach, office) and custom background upload
- 🎛️ Audio/video controls
- Node.js (16+)
- Modern web browser with WebRTC support (Chrome, Firefox, Edge, Safari)
- Clone the repository:

  ```bash
  git clone https://github.com/Ketan-K/segmentation-lab.git
  cd segmentation-lab
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Start the server:

  ```bash
  npm start
  ```

- Open your browser and navigate to `http://localhost:3000`
This project includes a .gitignore file that excludes:
- Node.js dependencies (`node_modules`)
- Environment files (`.env`, etc.)
- Build artifacts
- Log files
- Editor-specific files
- Cache files
For development:
- Fork or clone the repository
- Run `npm install` to install dependencies
- Make your changes
- Test locally with `npm start`
- Submit a pull request with your improvements
- Click "Create Meeting" on the home page
- Grant camera and microphone permissions when prompted
- Share the generated meeting code with others
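Shareable codes like these are typically short random strings drawn from an unambiguous alphabet. A minimal sketch of how such a code could be generated — the function name and alphabet here are illustrative assumptions, not the project's actual implementation:

```javascript
// Hypothetical sketch: generate a short, shareable meeting code.
// Uses an uppercase alphabet without 0/O and 1/I so codes are easy
// to read aloud or type from a chat message.
function generateMeetingCode(length = 6) {
  const alphabet = 'ABCDEFGHJKLMNPQRSTUVWXYZ23456789';
  let code = '';
  for (let i = 0; i < length; i++) {
    code += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return code;
}
```

A 6-character code over a 32-symbol alphabet gives about a billion combinations, which is plenty for casual collision avoidance while staying easy to share.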
- Enter the meeting code in the "Join Meeting" input field
- Click "Join Meeting"
- Grant camera and microphone permissions when prompted
- During a call, click "Virtual Background" button
- Select a segmentation model from the dropdown (MediaPipe is recommended for most devices)
- Choose a background type:
- None (original background)
- Blur (blurred background)
- Beach
- Office
- Custom (upload your own image)
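Every background mode works the same way underneath: the model produces a per-pixel person mask, and each output pixel is blended between the camera frame and the chosen background (a blurred copy of the frame, a stock image, or a custom upload). A framework-free sketch of that blend, operating on flat RGBA arrays as you would get from canvas `ImageData.data` — the function name is illustrative, not from the codebase:

```javascript
// Illustrative only: blend a camera frame over a background using a
// per-pixel segmentation mask (1 = person, 0 = background).
// frame and background are flat RGBA arrays (4 bytes per pixel);
// mask holds one alpha value in [0, 1] per pixel.
function compositeFrame(frame, background, mask) {
  const out = new Uint8ClampedArray(frame.length);
  for (let p = 0; p < mask.length; p++) {
    const a = mask[p];
    for (let ch = 0; ch < 4; ch++) {
      const i = p * 4 + ch;
      out[i] = Math.round(a * frame[i] + (1 - a) * background[i]);
    }
  }
  return out;
}
```

In the browser this per-pixel loop is usually replaced by canvas compositing or a WebGL shader for speed, but the math is the same.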
**MediaPipe**

- Uses Google's MediaPipe Selfie Segmentation
- Good balance of performance and quality
- Works well on most devices
- Currently the only implemented model

**SAM2**

- Based on Meta's Segment Anything Model 2
- Highest quality segmentation
- More resource-intensive

**BodyPix**

- TensorFlow.js-based segmentation
- Reasonable quality
- Higher resource usage

**WebGL**

- Custom WebGL-based segmentation
- Fastest performance
- Lower quality than other models
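Switching between models mid-call boils down to a registry that maps the dropdown value to a model class, plus a hand-off that disposes the old model before initializing the new one. A hedged sketch of that pattern — the function and method names are assumptions, not copied from the project's `models/` directory:

```javascript
// Illustrative registry for swapping segmentation models at runtime.
// Real model classes (e.g. MediaPipeModel) would share a common
// interface: init() to load weights, dispose() to free resources.
const modelRegistry = new Map();

function registerModel(name, ModelClass) {
  modelRegistry.set(name, ModelClass);
}

async function switchModel(current, name, ...args) {
  const ModelClass = modelRegistry.get(name);
  if (!ModelClass) throw new Error(`Unknown model: ${name}`);
  if (current) current.dispose();   // free the old model's resources first
  const next = new ModelClass(...args);
  await next.init();                // e.g. load weights, warm up the runtime
  return next;
}
```

Disposing before initializing matters in practice: two GPU-backed models loaded at once can exhaust memory on lower-end devices.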
The application provides real-time performance metrics for each segmentation model:
- FPS (Frames Per Second)
- Segmentation Time (ms)
- Frame Processing Time (ms)
This allows you to compare the efficiency of different models on your device.
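Raw per-frame numbers jitter too much to compare models by eye, so metrics like these are usually smoothed over a sliding window. A small sketch of how the displayed averages might be computed — illustrative, not the exact code in `performanceMetrics.js`:

```javascript
// Illustrative rolling-average tracker for per-frame metrics:
// total frame processing time, segmentation (inference) time, and
// the FPS derived from the average frame time, over the last N samples.
class MetricsTracker {
  constructor(windowSize = 30) {
    this.windowSize = windowSize;
    this.samples = [];
  }
  // frameMs: total frame processing time; segMs: model inference time
  record(frameMs, segMs) {
    this.samples.push({ frameMs, segMs });
    if (this.samples.length > this.windowSize) this.samples.shift();
  }
  get avgFrameMs() {
    return this.samples.reduce((s, x) => s + x.frameMs, 0) / this.samples.length;
  }
  get avgSegMs() {
    return this.samples.reduce((s, x) => s + x.segMs, 0) / this.samples.length;
  }
  get fps() {
    return 1000 / this.avgFrameMs;
  }
}
```

Comparing `avgSegMs` against `avgFrameMs` also shows how much of each frame budget the model itself consumes versus compositing and rendering overhead.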
```
segmentation-lab/
├── app.js                     # Main client-side application logic
├── index.html                 # Main HTML file
├── package.json               # Project dependencies
├── server.js                  # WebRTC signaling server
├── styles.css                 # Application styles
├── .gitignore                 # Git ignore patterns
├── assets/                    # Background images
│   ├── beach.png
│   └── office.png
├── js/                        # JavaScript modules
│   ├── components/            # UI components
│   │   ├── eventHandlers.js       # Event handling logic
│   │   ├── performanceMetrics.js  # Performance measuring utilities
│   │   └── uiController.js        # UI manipulation functions
│   ├── services/              # Services
│   │   ├── backgroundService.js   # Background effects processing
│   │   ├── socketService.js       # Socket.io communication
│   │   └── webrtcService.js       # WebRTC connection management
│   └── utils/                 # Utility functions
│       ├── alertUtils.js      # Alert/notification utilities
│       └── generalUtils.js    # General helper functions
└── models/                    # Segmentation model implementations
    ├── BaseBackgroundModel.js # Base model class
    └── MediaPipeModel.js      # MediaPipe integration
```
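The `models/` layout suggests each segmentation backend extends a common base class, which is what makes the "coming soon" models drop-in additions. A hedged sketch of what such a base might look like — the method names are assumptions, not copied from `BaseBackgroundModel.js`:

```javascript
// Illustrative base class for segmentation models. Concrete models
// (MediaPipe, SAM2, BodyPix, WebGL) would override init() and segment().
class BaseBackgroundModel {
  constructor(name) {
    this.name = name;
    this.initialized = false;
  }
  async init() {
    // Subclasses load weights / set up their runtime here.
    this.initialized = true;
  }
  // Should resolve to a per-pixel person mask for the given frame.
  async segment(frame) {
    throw new Error(`${this.name}: segment() not implemented`);
  }
  dispose() {
    this.initialized = false;
  }
}
```

With this shape, the rest of the app (background compositing, metrics, model switching) only ever talks to the base interface and never needs to know which backend is active.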
This project is licensed under the MIT License - see the LICENSE file for details.