Copilot AI commented Oct 18, 2025

Overview

This PR implements a complete YOLO-based computer vision system for the SUAS (Student Unmanned Aerial Systems) competition with full support for camera/video feed input, addressing the requirements specified in the issue.

What's New

Core Features

YOLODetector Class - A comprehensive detection system that supports:

  • Real-time camera detection: Process live feeds from USB cameras, webcams, or drone-mounted cameras
  • Video file processing: Analyze pre-recorded videos with progress tracking and output saving
  • Image detection: Process single images or batch image sets
Basic usage:

from src.detector import YOLODetector

detector = YOLODetector(model_path='yolov8n.pt')
detector.detect_from_camera(camera_index=0)  # Live camera feed
detector.detect_from_video(video_path='flight.mp4')  # Video analysis
detector.detect_from_image(image_path='target.jpg')  # Image detection

Command-Line Interface - Unified entry point for all detection modes:

# Camera detection
python src/detector.py --source 0

# Video processing with output
python src/detector.py --source video.mp4 --save --output result

# Custom model and thresholds
python src/detector.py --source 0 --model yolov8m.pt --conf 0.3 --iou 0.5
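The --source flag accepts either a numeric camera index, a video file, or an image file. A minimal sketch of how that dispatch could work (the function name, extension sets, and logic below are illustrative assumptions, not the PR's actual code in src/detector.py):

```python
from pathlib import Path

# Illustrative extension sets; the real CLI may recognize more formats.
VIDEO_EXTS = {".mp4", ".avi", ".mov", ".mkv"}
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def classify_source(source: str) -> str:
    """Map a --source argument onto a detection mode.

    Digit strings are treated as camera indices; anything else is
    routed by file extension.
    """
    if source.isdigit():
        return "camera"
    ext = Path(source).suffix.lower()
    if ext in VIDEO_EXTS:
        return "video"
    if ext in IMAGE_EXTS:
        return "image"
    raise ValueError(f"Unrecognized source: {source!r}")
```

With this scheme, `--source 0` routes to live camera detection while `--source video.mp4` routes to file processing, which matches the CLI examples above.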

Utility Modules

  • CameraHandler: Manages camera devices with enumeration, resolution configuration, and context manager support
  • VideoProcessor: Handles video file reading, seeking, and property extraction
  • VideoWriter: Saves processed output to video files
  • Configuration: SUAS-specific settings and detection parameters
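The CameraHandler's context-manager support might look like the sketch below. The class name comes from this PR; the constructor signature and the injectable `capture_factory` parameter (which lets the pattern be demonstrated without camera hardware) are assumptions for illustration:

```python
class CameraHandler:
    """Sketch of a context-managed camera wrapper (assumed API).

    The real class presumably wraps cv2.VideoCapture; the capture
    factory is injectable here so the open/release pattern can be
    shown and tested without a physical device.
    """

    def __init__(self, index=0, capture_factory=None):
        if capture_factory is None:
            import cv2  # deferred so the sketch imports without OpenCV
            capture_factory = cv2.VideoCapture
        self._factory = capture_factory
        self.index = index
        self.cap = None

    def __enter__(self):
        self.cap = self._factory(self.index)
        if not self.cap.isOpened():
            raise RuntimeError(f"Camera {self.index} could not be opened")
        return self.cap

    def __exit__(self, *exc):
        # Always release the device, even if the body raised.
        if self.cap is not None:
            self.cap.release()
        return False
```

The context-manager form guarantees the device is released even when frame processing raises, which is the "safe resource handling" goal noted in the architecture section.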

Example Scripts

Three ready-to-use examples are included:

  • examples/list_cameras.py - Discover available camera devices
  • examples/run_camera_detection.py - Quick camera detection demo
  • examples/run_video_detection.py - Video file processing example

Documentation

Comprehensive documentation for quick onboarding:

  • README.md: Complete guide with installation, usage, troubleshooting, and SUAS-specific notes
  • QUICKSTART.md: Get started in minutes with common use cases
  • test_installation.py: Verify installation and dependencies

Technical Details

Architecture

The system is built with modularity and maintainability in mind:

  • Separation of concerns (detection, camera handling, video processing)
  • Context managers for safe resource handling
  • Flexible input handling through a unified interface
  • Configurable detection parameters

Dependencies

Uses modern, actively-maintained libraries:

  • ultralytics (≥8.0.0) - YOLOv8 implementation
  • opencv-python (≥4.8.1.78) - Computer vision operations
  • torch (≥2.6.0) - Deep learning framework
  • numpy (≥1.24.0) - Numerical operations

All dependencies have been updated to patched versions to address known security vulnerabilities.

Security

  • CodeQL scan: 0 vulnerabilities in custom code
  • Dependencies: Updated to secure versions
    • opencv-python ≥4.8.1.78 (fixes CVE-2023-4863)
    • Pillow ≥10.2.0 (fixes libwebp and RCE vulnerabilities)
    • torch ≥2.6.0 (fixes heap buffer overflow and use-after-free vulnerabilities)

SUAS Competition Ready

The system is specifically designed for SUAS competition requirements:

  • Real-time detection for autonomous systems
  • Drone-mounted camera support via device indices
  • Pre-recorded flight analysis capabilities
  • Configurable detection classes for competition-specific objects
  • Performance optimization options (model selection, threshold tuning)
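Class restriction and threshold tuning amount to a post-processing filter over raw detections. A small self-contained sketch of that step (the function, the dict layout, and the class names are illustrative assumptions, not the PR's config.py API):

```python
def filter_detections(detections, conf_threshold=0.25, allowed_classes=None):
    """Keep detections above the confidence threshold and, optionally,
    within an allowed class set.

    `detections` is assumed to be a list of dicts with "class" and
    "conf" keys; this mirrors what tunable --conf / class-list options
    would control downstream of the model.
    """
    keep = []
    for det in detections:
        if det["conf"] < conf_threshold:
            continue
        if allowed_classes is not None and det["class"] not in allowed_classes:
            continue
        keep.append(det)
    return keep
```

Raising `conf_threshold` trades recall for precision, which is the usual knob for competition runs where false positives are costly.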

Testing

To test the implementation:

  1. Install dependencies:

    pip install -r requirements.txt
  2. Verify installation:

    python test_installation.py
  3. Test camera detection:

    python examples/list_cameras.py  # Find available cameras
    python src/detector.py --source 0  # Run detection
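A dependency check of the kind test_installation.py performs can be sketched with the standard library alone (the function name and module list are assumptions; note that opencv-python imports as `cv2`):

```python
import importlib.util

# Import names for the dependencies in requirements.txt
# (opencv-python installs as the `cv2` module).
REQUIRED = ["ultralytics", "cv2", "torch", "numpy"]

def check_dependencies(modules=REQUIRED):
    """Return the subset of `modules` that cannot be imported.

    An empty result means every required package is installed.
    """
    return [m for m in modules if importlib.util.find_spec(m) is None]
```

Running it after `pip install -r requirements.txt` should return an empty list; any names it returns point to packages that still need installing.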

Project Structure

somars-vision/
├── src/
│   ├── detector.py          # Main YOLO detector (274 lines)
│   ├── config.py            # Configuration settings
│   └── utils/
│       ├── camera.py        # Camera handling (149 lines)
│       └── video.py         # Video processing (208 lines)
├── examples/                # Example scripts
├── data/                    # Data directory
├── weights/                 # Model weights directory
├── README.md                # Comprehensive documentation
├── QUICKSTART.md            # Quick start guide
├── requirements.txt         # Python dependencies
└── setup.py                 # Package installation

Requirements Compliance

✅ Running YOLO computer vision machine learning for SUAS competition
✅ Camera/video feed input support
✅ Pluggable input system (camera index, video files, images)

All requirements from the problem statement have been successfully implemented.

Original prompt

Running YOLO computer vision machine learning for the SUAS competition's mission statement
the computer vision algorithm should be able to have a camera/video feed input plugged into it



Copilot AI and others added 3 commits October 18, 2025 01:08
Co-authored-by: aborjigin <151589909+aborjigin@users.noreply.github.com>
Copilot AI changed the title [WIP] Implement YOLO algorithm for camera input Implement YOLO Computer Vision System with Camera/Video Feed Support for SUAS Competition Oct 18, 2025
Copilot AI requested a review from aborjigin October 18, 2025 01:13
