Todd Ludington

HamCation Event Technology Platform

Active since 2026-01

A suite of interconnected web apps powering volunteer operations for one of the largest amateur radio conventions in the country.

Cloudflare Workers · D1 · Pages · KV · TypeScript · Python · FastAPI · Raspberry Pi · Ansible · Prometheus · Grafana · Docker

Background

HamCation is one of the largest amateur radio conventions in the United States, held annually in Orlando, Florida. It draws thousands of attendees and relies on hundreds of volunteers to run everything from hospitality and security to prize drawings and vendor coordination. I volunteer as part of the IT committee, building the systems that keep the event running.

What started as a single app to replace paper meal tickets grew into a full technology platform spanning eight interconnected applications, a monitoring stack, and fleet-managed hardware.

What I built

The platform runs entirely on Cloudflare’s edge infrastructure — Workers for API logic, Pages for static frontends, D1 for data, KV for caching, and Cloudflare Access for authentication tied to Google Workspace groups. Each system is its own project with its own git repo, but they share a common D1 database, audit log, and architectural patterns.

Meals — Real-time meal eligibility scanning. Volunteers scan badges to check whether someone is eligible for a meal that day. Supports offline operation via cached eligibility snapshots when event WiFi becomes unreliable, with role-based overrides for edge cases.

Security — Badge scanning at security gates with offline-first architecture. Uses IndexedDB and a service worker to cache the full roster locally so scanning continues even without a network connection. Background sync pushes queued scans when connectivity returns.
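The browser clients implement this with IndexedDB and a service worker; the queue-and-sync logic they follow can be sketched in Python (names like `ScanQueue` are illustrative, not the production implementation):

```python
import time
from dataclasses import dataclass, field


@dataclass
class Scan:
    badge_id: str
    scanned_at: float


@dataclass
class ScanQueue:
    """Record badge scans locally; flush queued scans when connectivity returns."""
    pending: list = field(default_factory=list)
    synced: list = field(default_factory=list)

    def record(self, badge_id: str, online: bool, send) -> None:
        scan = Scan(badge_id, time.time())
        if online:
            send(scan)                 # network available: push immediately
            self.synced.append(scan)
        else:
            self.pending.append(scan)  # offline: queue for background sync

    def flush(self, send) -> int:
        """Push queued scans in order once the network is back."""
        sent = 0
        while self.pending:
            scan = self.pending.pop(0)
            send(scan)
            self.synced.append(scan)
            sent += 1
        return sent
```

In the real app the `send` step is a network request and `flush` runs from the service worker's background sync handler; the point is that a scan is never lost, only deferred.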

Badge Printer — Cloud-to-hardware badge printing. A Cloudflare Worker manages a print queue while Raspberry Pi agents running Python poll for jobs and drive local printers via CUPS. Agents authenticate with Cloudflare Access service tokens. Stuck print jobs auto-release after five minutes.
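The stuck-job auto-release can be modeled as a small claim/expire state machine. This is a hedged sketch of the pattern, not the Worker's actual code; class and method names are hypothetical:

```python
import time
from dataclasses import dataclass
from typing import Optional

STUCK_AFTER_SECONDS = 5 * 60  # claims older than this are treated as stuck


@dataclass
class PrintJob:
    job_id: str
    claimed_at: Optional[float] = None  # None means still queued


class PrintQueue:
    """Server-side queue; Pi agents poll claim() and call complete() when done."""

    def __init__(self):
        self.jobs: dict = {}

    def enqueue(self, job_id: str) -> None:
        self.jobs[job_id] = PrintJob(job_id)

    def claim(self, now: Optional[float] = None) -> Optional[PrintJob]:
        now = time.time() if now is None else now
        for job in self.jobs.values():
            # Auto-release: an old claim means the agent died or the printer jammed.
            if job.claimed_at is not None and now - job.claimed_at > STUCK_AFTER_SECONDS:
                job.claimed_at = None
            if job.claimed_at is None:
                job.claimed_at = now
                return job
        return None

    def complete(self, job_id: str) -> None:
        self.jobs.pop(job_id, None)
```

Because agents only ever poll outward, no inbound firewall holes are needed on the event network; a crashed agent simply stops claiming, and its stuck job returns to the queue after the timeout.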

Volunteers — Multi-tier hours submission and approval workflow. Volunteers submit hours through a self-service portal, chairs review and approve, coordinators do a second pass, and HamCation leadership does final review. Integrates with the ProPublica Nonprofit Explorer API to verify 501(c)(3) organizations and the FCC ULS database to validate amateur radio callsigns.
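The review chain is effectively a linear state machine. A minimal sketch, with status names that are illustrative rather than the real schema:

```python
# Each hours submission moves one step at a time through the review chain.
APPROVAL_CHAIN = [
    "submitted",             # volunteer self-service portal
    "chair_approved",        # chair review
    "coordinator_approved",  # coordinator second pass
    "final_approved",        # HamCation leadership sign-off
]


def advance(status: str) -> str:
    """Move a submission one step down the chain; reject invalid transitions."""
    i = APPROVAL_CHAIN.index(status)
    if i == len(APPROVAL_CHAIN) - 1:
        raise ValueError("already fully approved")
    return APPROVAL_CHAIN[i + 1]
```

Modeling the tiers as an ordered list keeps the transition logic trivial and makes it easy to render "where is my submission?" progress in the portal.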

Prizes — Prize drawing management with FCC callsign lookup for winner data enrichment. Syncs prize inventory from a MySQL database (managed by the website team), runs drawings through D1, and writes winners back to MySQL on finalization. Invalidates KV cache entries so display kiosks update in near real-time.
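The drawing-plus-invalidation flow can be sketched as two small steps; the KV key names below are illustrative, not the production schema:

```python
import random


def run_drawing(prize_id: str, entrants: list, rng: random.Random) -> str:
    """Pick one winner uniformly at random. The real system records the
    result in D1 and writes it back to MySQL on finalization."""
    if not entrants:
        raise ValueError(f"no entrants for prize {prize_id}")
    return rng.choice(entrants)


def kiosk_cache_keys(prize_id: str) -> list:
    """KV keys to invalidate so display kiosks pick up the new winner
    on their next fetch instead of waiting for TTL expiry."""
    return [f"prizes:{prize_id}", "prizes:winners:latest"]
```

Passing in the `random.Random` instance keeps drawings reproducible under test while using a fresh seed in production.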

Display Kiosks — A fleet of four Raspberry Pi devices running Python/FastAPI clients in Chromium kiosk mode. They show rotating prize info, event schedules, vendor listings, and live winner announcements via Server-Sent Events. A separate Cloudflare Workers API serves the data with tiered KV caching. The entire Pi fleet is provisioned and managed with Ansible playbooks that handle Tailscale VPN, systemd services, boot ordering, and automatic recovery after power loss.
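Tiered caching here means serving hot entries directly, serving slightly stale entries while refreshing, and only going to origin on a cold miss. A minimal sketch of that read-through pattern (TTL values and names are assumptions for illustration):

```python
import time


class TieredCache:
    """Read-through cache with two TTL tiers: entries younger than hot_ttl
    are served as-is; entries younger than warm_ttl are served while being
    refreshed from origin; anything older is a miss."""

    def __init__(self, fetch_origin, hot_ttl=30.0, warm_ttl=300.0):
        self.fetch_origin = fetch_origin
        self.hot_ttl = hot_ttl
        self.warm_ttl = warm_ttl
        self._store = {}  # key -> (value, stored_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry is not None:
            value, stored_at = entry
            age = now - stored_at
            if age <= self.hot_ttl:
                return value                 # fresh: no origin traffic
            if age <= self.warm_ttl:
                self._refresh(key, now)      # warm: refresh, serve stale copy
                return value
        return self._refresh(key, now)       # cold or expired: fetch from origin

    def _refresh(self, key, now):
        value = self.fetch_origin(key)
        self._store[key] = (value, now)
        return value
```

This keeps kiosk page loads fast even when the backing Worker or D1 is briefly slow, at the cost of serving data up to warm_ttl seconds old.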

Monitoring — A Docker-based Prometheus and Grafana stack scraping metrics from every Worker via /api/metrics endpoints. All apps expose business-specific counters (scans processed, meals served, badges printed) alongside standard request metrics. Uptime Kuma watches service health, and Loki aggregates logs.
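The /api/metrics endpoints return the Prometheus text exposition format. A hand-rolled sketch of rendering the business counters (metric names here are illustrative):

```python
def render_metrics(counters: dict) -> str:
    """Render counters in the Prometheus text exposition format, as served
    from each app's /api/metrics endpoint for Prometheus to scrape."""
    lines = []
    for name, value in sorted(counters.items()):
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

Keeping business counters (meals served, badges printed) next to request metrics means one Grafana dashboard can correlate "scans are failing" with "error rate spiked" during the event.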

How it fits together

Every app writes to a shared audit log in D1, tagged by source application. Cloudflare Access handles authentication across all apps using Google Workspace groups — adding a volunteer to a Google Group automatically grants them access to the right tools. The monitoring stack ties it all together with cross-app dashboards.
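The shared audit log pattern amounts to every write path building a row tagged with its source application. A hedged sketch; column names are illustrative, the real schema lives in D1:

```python
import json
import time


def audit_entry(source_app: str, actor: str, action: str, detail: dict) -> dict:
    """Build one shared-audit-log row, tagged by the app that produced it,
    so cross-app activity can be traced after the event."""
    return {
        "source_app": source_app,       # e.g. "meals", "security", "prizes"
        "actor": actor,                 # identity from Cloudflare Access
        "action": action,
        "detail": json.dumps(detail),   # structured payload stored as JSON text
        "ts": int(time.time()),
    }
```

Tagging by source app is what makes a single D1 table usable as the audit trail for eight different applications.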

The systems are designed for a specific constraint: they have to work reliably during a live three-day event with hundreds of concurrent users, inconsistent WiFi, and no tolerance for downtime. That drives most of the architectural decisions — offline caching, graceful degradation, auto-recovery, and comprehensive audit trails so issues can be traced after the fact.

Technical challenges

Offline reliability — Event venues have unpredictable network conditions. The security and meals scanners use IndexedDB, service workers, and cached snapshots to keep working without connectivity. Scans queue locally and sync when the network returns.

Hardware integration — The badge printer and display kiosk systems bridge cloud services with physical Raspberry Pi devices. The printer agents poll for work rather than requiring inbound connections, which simplifies networking. The kiosk fleet needs to survive power outages and boot back to a working state without manual intervention.

Cross-browser support — The apps run on laptops, phones, and Amazon Kindle Fire tablets used as kiosks. Supporting Chrome, Safari, and Amazon Silk means no cutting-edge browser APIs and careful attention to date parsing, IndexedDB quirks, and CSS compatibility.

D1 write budgets — Cloudflare D1 has daily write limits on the free tier. High-frequency polling endpoints (health checks, metrics scrapes, agent heartbeats) skip audit log writes to stay within budget. Dynamic queries with large IN clauses batch into chunks of 80 to stay under the 100 bound parameter limit.
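The IN-clause chunking is straightforward to sketch: split the ID list into groups of 80 and emit one parameterized query per group, keeping each query safely under D1's 100-bound-parameter limit (function name is illustrative):

```python
def chunked_in_queries(table: str, column: str, ids: list, chunk_size: int = 80):
    """Yield (sql, params) pairs, each with at most chunk_size bound
    parameters, to stay under D1's 100-parameter-per-query limit."""
    for i in range(0, len(ids), chunk_size):
        chunk = ids[i:i + chunk_size]
        placeholders = ", ".join("?" for _ in chunk)
        yield f"SELECT * FROM {table} WHERE {column} IN ({placeholders})", chunk
```

The caller runs each chunk and concatenates the result sets; 80 rather than 100 leaves headroom for any extra bound parameters in the WHERE clause.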

Why it matters

This project shows what a small volunteer team can build with modern edge infrastructure. The entire platform runs on Cloudflare’s free and low-cost tiers, costs almost nothing to operate, and handles real production load during a major annual event.

It also represents the kind of work I find most satisfying — building practical systems that solve real problems for real people, with enough engineering discipline to be reliable when it counts.
