Discer.io is an interactive platform that teaches agentic AI concepts through a multiplayer battle royale game. Build AI agents using a visual block-based programming interface, deploy them into a live combat arena, and watch them make real-time decisions using large language models. Created for HackPrinceton Fall 2025.
A React-based drag-and-drop environment where users design agent behavior without writing code. Agents are composed of:
- Action Blocks: Entry points (onStart, onAttacked)
- Agent Blocks: LLM decision points with system/user prompts
- Tool Blocks: Game actions (move, attack, collect, switch_weapon, plan, search)
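The three block types might compose into a program like the following sketch. All field names here (`blocks`, `connections`, etc.) are illustrative assumptions, not the frontend's actual schema:

```python
# Hypothetical block program: an onStart entry point feeds an LLM decision
# block, which can invoke game-action tools. Field names are assumptions;
# the real frontend schema may differ.
agent_program = {
    "blocks": [
        {"id": "b1", "type": "action", "event": "onStart"},
        {"id": "b2", "type": "agent",
         "system_prompt": "You are a cautious survivor.",
         "user_prompt": "Decide your next move from the game state."},
        {"id": "b3", "type": "tool", "name": "move"},
        {"id": "b4", "type": "tool", "name": "attack"},
    ],
    # Connections wire the entry point to the agent, and the agent to its tools
    "connections": [("b1", "b2"), ("b2", "b3"), ("b2", "b4")],
}

def tools_for(agent_block_id: str) -> list[str]:
    """Return the tool names reachable from a given agent block."""
    targets = {dst for src, dst in agent_program["connections"]
               if src == agent_block_id}
    return [b["name"] for b in agent_program["blocks"]
            if b["id"] in targets and b["type"] == "tool"]
```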
FastAPI server managing agent execution with dual LLM provider support:
- Daedalus Mode: Multi-provider access (OpenAI, Anthropic, Google) with MCP server integration
- OpenAI Mode: Direct API calls for lower latency
- Parallel agent execution with configurable step delays
- Action history tracking and plan persistence
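The dual-provider switch can be pictured as a small dispatch on `LLM_PROVIDER`. This is a simplified sketch of the two modes described above, not the backend's actual `main.py` logic:

```python
def resolve_provider(env: dict[str, str]) -> dict:
    """Pick an LLM provider config from environment variables.

    Mirrors the README's two modes: 'daedalus' routes through the Daedalus
    SDK (multi-provider + MCP), 'openai' calls the OpenAI API directly.
    Illustrative only -- the backend's real config code may differ.
    """
    provider = env.get("LLM_PROVIDER", "daedalus").lower()
    if provider == "daedalus":
        return {
            "provider": "daedalus",
            "api_key": env["DEDALUS_API_KEY"],
            "model": env.get("DEFAULT_MODEL", "openai/gpt-4o-mini"),
            "mcp_enabled": True,   # MCP planning/search tools available
        }
    if provider == "openai":
        return {
            "provider": "openai",
            "api_key": env["OPENAI_API_KEY"],
            "model": env.get("DEFAULT_MODEL", "gpt-4o-mini"),
            "mcp_enabled": False,  # direct calls: lower latency, no MCP
        }
    raise ValueError(f"Unknown LLM_PROVIDER: {provider}")
```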
Together, our Visual Programming Interface and Agent Orchestration Backend power our Guided Learning Platform and Creative Multiplayer Game.
Learn agentic AI step-by-step with an integrated curriculum:
- Guided lessons and challenges with a visual, no-code block builder
- Lesson roadmap to track progress and reinforce core concepts
- Real-time execution visualization and actionable hints
- Design, orchestrate, and deploy agentic workflows
Turn ideas into playable agents and watch emergent strategies unfold:
- Build unique behaviors and deploy into a live, physics-driven arena
- Experiment rapidly: tweak prompts, tools, and plans mid-session
- Designed for creativity-first gameplay that teaches by doing
Real-time multiplayer battle arena built with TypeScript/Bun:
- Physics-based bullet collision system
- Weapon mechanics (pistols, rifles, shotguns, melee)
- Resource management (ammo, health, XP)
- Dynamic obstacle destruction
- WebSocket-based multiplayer
Visual Agent Programming
- Scratch-like block interface with live execution visualization
- No coding required - design complex agent behaviors through connections
- Real-time feedback showing which blocks are actively executing
LLM-Powered Decision Making
- Agents use GPT-4, Claude, or Gemini to make tactical decisions
- Game state fed to LLM with nearby agents, loot, obstacles, and ammo status
- Supports strategic planning via MCP server integration (Daedalus mode only)
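The state-to-prompt step might look like the sketch below. The field names (`nearby_agents`, `loot`, `obstacles`, ...) are assumptions for illustration; the backend's actual serialization is defined in `main.py`:

```python
def build_user_prompt(state: dict) -> str:
    """Flatten a game-state snapshot into an LLM user prompt.

    Hypothetical field names -- see backend/main.py for the real format.
    """
    lines = [
        f"Health: {state['health']}  Ammo: {state['ammo']}",
        f"Nearby agents: {', '.join(state['nearby_agents']) or 'none'}",
        f"Loot in range: {', '.join(state['loot']) or 'none'}",
        f"Obstacles nearby: {len(state['obstacles'])}",
        "Choose one action: move, attack, collect, or switch_weapon.",
    ]
    return "\n".join(lines)
```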
Combat Mechanics
- Attack action fires 2 shots with 200ms delay between shots
- Weapon-dependent bullet count: pistol/rifle (1 bullet/shot), shotgun (8 pellets/shot)
- Melee combat with fists when ammunition depleted
- Obstacle line-of-sight blocking and destructible cover
Resource Scarcity
- Limited ammo creates strategic tension
- Universal ammo system shared across all firearms
- Weapon switching between slots for tactical flexibility
- Node.js 18+
- Python 3.10+
- Bun runtime
- OpenAI API key and/or Daedalus API key
1. Game Environment
cd game_environment
bun install
bun run dev   # Client: http://localhost:3000, Server API: http://localhost:8000
2. Backend
cd backend
# Option A (recommended): using uv
uv sync
uv run python main.py # http://localhost:8001
# Option B: using pip
pip install -r requirements.txt
python main.py                  # http://localhost:8001
Create a .env file in backend/ with at least:
# Choose one provider: daedalus (with MCP support) or openai
LLM_PROVIDER=daedalus
# If using Daedalus (multi-provider + MCP)
DEDALUS_API_KEY=your_dedalus_key
DEFAULT_MODEL=openai/gpt-4o-mini
# If using OpenAI directly (no MCP)
# OPENAI_API_KEY=your_openai_key
# DEFAULT_MODEL=gpt-4o-mini
# Optional tuning
STEP_DELAY=6.0
LLM_TIMEOUT=5.0
3. Frontend
cd frontend
npm install
npm run dev   # http://localhost:3001
The backend supports two LLM providers via .env:
Daedalus Labs (Default)
LLM_PROVIDER=daedalus
DEDALUS_API_KEY=your_key_here
DEFAULT_MODEL=openai/gpt-4o-mini
Supports multiple providers and MCP servers for planning/search tools.
OpenAI Direct
LLM_PROVIDER=openai
OPENAI_API_KEY=your_key_here
DEFAULT_MODEL=gpt-4o-mini
Faster and simpler, but OpenAI models only (no MCP support).
- Design Agent: In the frontend (http://localhost:3001), drag blocks to define behaviors.
- Deploy: Click "Add Agent" to register the program in the backend.
- Start Game: Ensure the game environment is running (client http://localhost:3000, server http://localhost:8000).
- Auto-Step: Auto-stepping starts automatically when agents are registered; you can also manage it via the API (/start-auto-stepping, /stop-auto-stepping). The interval is controlled by STEP_DELAY.
- Watch: Observe agents making LLM-powered decisions in real-time on the game client.
Frontend (Blocks) → Backend (LLM Inference) → Game Environment (Physics)
       ↓                      ↓                         ↓
  Visual Design        Decision Making          Combat Simulation
       ↓                      ↓                         ↓
       └──────────────────────┴─────────────────────────┘
                       (Continuous Loop)
Each game step:
- Backend fetches game state for all agents
- LLM processes state and decides action (move/attack/collect/switch_weapon)
- Actions sent to game environment simultaneously
- Game physics updates, state changes
- Repeat every STEP_DELAY seconds
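The per-step control flow above can be sketched as follows. The callables (`fetch_state`, `decide`, `send_action`) stand in for the backend's HTTP client and LLM call; this illustrates the loop structure only, not the actual implementation:

```python
import time

def run_game_steps(agents, fetch_state, decide, send_action,
                   steps: int, step_delay: float = 6.0):
    """One decision cycle per agent per step, repeated every step_delay seconds.

    Illustrative sketch of the backend loop: fetch state for all agents,
    let the LLM pick an action each, dispatch the actions, then wait.
    """
    history = []
    for _ in range(steps):
        # 1-2. Fetch each agent's game state and let the LLM decide an action
        actions = {a: decide(a, fetch_state(a)) for a in agents}
        # 3. Send all actions to the game environment
        for agent_id, action in actions.items():
            send_action(agent_id, action)
        history.append(actions)
        # 4-5. Physics updates in the game server; repeat after STEP_DELAY
        time.sleep(step_delay)
    return history
```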
- One attack action = 2 bullets fired (enforced in agentBridge.ts)
- Minimum 200ms between individual shots
- Attack state persists for 3 seconds or until target changes
- Weapon fire delay respected (pistol: 150ms, rifle: 100ms, shotgun: 900ms)
- Auto-reload when magazine empty
- Total bullets per attack:
- Pistol/Rifle: 2 bullets
- Shotgun: 16 pellets (2 shots × 8 pellets)
- Fists: 2 melee swings
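The per-attack totals follow directly from 2 shots × projectiles-per-shot, as a quick check (weapon figures taken from the list above):

```python
SHOTS_PER_ATTACK = 2  # one attack action fires 2 shots (see agentBridge.ts)
PROJECTILES_PER_SHOT = {"pistol": 1, "rifle": 1, "shotgun": 8, "fists": 1}

def projectiles_per_attack(weapon: str) -> int:
    """Total bullets/pellets (or melee swings) one attack action produces."""
    return SHOTS_PER_ATTACK * PROJECTILES_PER_SHOT[weapon]
```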
hack_princeton_F25/
├── frontend/               # React visual programming interface
│   └── src/
│       ├── components/     # Block, Connection, Canvas components
│       └── App.jsx         # Main application
├── backend/                # FastAPI agent orchestration
│   ├── main.py             # Server with dual LLM provider support
│   ├── game_client.py      # Game environment HTTP client
│   └── (configure via .env)
├── game_environment/       # TypeScript/Bun game server
│   ├── server/src/
│   │   ├── agentBridge.ts  # Backend action → game input translation
│   │   ├── objects/        # Player, AIAgent, Bullet classes
│   │   └── game.ts         # Main game loop
│   ├── client/             # Browser-based game renderer
│   └── common/             # Shared definitions (guns, packets)
└── readme_Images/          # Documentation assets
Backend (http://localhost:8001)
- POST /add-agent - Register agent program
- POST /register-agents-in-game - Spawn agents in arena
- POST /execute-game-step - Execute one decision cycle
- POST /start-auto-stepping - Enable continuous execution
- POST /stop-auto-stepping - Stop continuous execution
- POST /cleanup-game-session - Remove agents and optionally stop auto-stepping
- GET /agents-state - Current execution node per agent
- GET /list-agents - List agents and summary state
- GET /game-session-status - Current game state
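A minimal client call to the backend might look like the sketch below. The payload fields are assumptions for illustration; the backend's Pydantic models in `main.py` define the real request schema:

```python
import json

BACKEND = "http://localhost:8001"

def add_agent_payload(name: str, blocks: list, connections: list) -> str:
    """Build the JSON body for POST /add-agent (hypothetical field names)."""
    return json.dumps({
        "name": name,
        "blocks": blocks,            # block definitions from the frontend
        "connections": connections,  # wiring between blocks
    })

# Usage sketch (requires the backend running on port 8001):
#   import requests
#   body = add_agent_payload("scout", blocks=[...], connections=[...])
#   requests.post(f"{BACKEND}/add-agent", data=body,
#                 headers={"Content-Type": "application/json"})
```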
Game Environment (http://localhost:8000)
- POST /api/agent/register - Create AI agent in game
- GET /api/agent/state/{agent_id} - Get game state
- POST /api/agent/command - Send action (move/attack/collect)
- Frontend: Next.js 14, React 18, Tailwind CSS, Lucide Icons
- Backend: FastAPI, Pydantic, OpenAI SDK, Daedalus SDK, AsyncIO
- Game: TypeScript, Bun, WebSocket, Canvas API
- AI: OpenAI GPT-4o, Anthropic Claude, Google Gemini (via Daedalus)
Game environment inspired by Suroi - an open-source 2D battle royale. Special thanks to the Suroi team for demonstrating excellent multiplayer game architecture.
Built for HackPrinceton Fall 2025.
MIT





