George Khananaev
Case Study

Live Deck: a call center dashboard for the TV wall

React 19 + Vite 7 + React Server Components, version 2.0.1, 9 widgets, 32×18 draggable grid, SSE streaming with zombie-connection detection

5 min read
Moon Holidays · Jan 2026 — Present · Head of Development & IT Infrastructure

Overview

Live Deck is the wall-mounted dashboard the Moon Holidays call center watches all day, currently at version 2.0.1. It consumes live data from the Travel Panel core platform and from Aircall telephony, rendered through React Server Components into a landing grid with Sales, Customer Service, and Marketing tiles. The Customer Service dashboard is the fully built one, with nine widgets streaming live data; the other two tiles are placeholders reserved for future expansion.

It has to look crisp at 55 inches, never drop a frame, and never stop running.

The real problem

Off-the-shelf BI dashboards are built for desks, not for TVs. They assume someone is clicking around. A call center TV has to be glanceable, high-contrast, updated in real time, and ideally unattended for weeks. That last requirement turns out to be the hard one: the dashboard has to survive network blips, zombie SSE connections, tab suspension, and browser refresh loops without anyone walking into the room to fix it.

What got built

  • Draggable 32×18 grid layouts for widget placement. Operators drag widgets into position once, save the profile, and forget. Layout migration handles changes between versions without losing user configurations.
  • Multi-profile settings per user, so different shift leaders can have different layouts.
  • Real-time WebSocket streaming of Aircall call data (VITE_WSS_URL), combined with historical data loaded on mount via TanStack Query.
  • SSE push for server-initiated updates at GET /api/v1/events, with zombie-connection detection using named heartbeat events (not :ping comments) at a 30-second interval so the client stall detector can surface connection death to the reconnect logic (sketched after this list).
  • TV and kiosk optimization: spatial navigation (D-pad friendly), auto-rotation, branded landing page with live server status indicators, VITE_ENABLE_SETTINGS_UI=false for kiosk mode to hide editing controls.
  • Per-widget configuration, each card tuned independently.
  • Content-aware RTL because the dashboard runs in offices that work in both Hebrew and English.
  • Demo mode without Firebase — set any credentials and the app boots to show the landing grid, useful for on-site demos without the production identity provider.
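
To make the zombie-detection bullet concrete, here is a minimal client-side sketch. It assumes the server emits a named heartbeat event every 30 seconds as described above; the event name, the stall threshold, and the reconnect backoff are illustrative, not the production values.

```typescript
// Hypothetical stall detector for the SSE feed. Named heartbeat events
// (unlike ":ping" comments) are visible to EventSource listeners, so the
// client can notice when a connection is open but silently dead.
const HEARTBEAT_INTERVAL_MS = 30_000;
const STALL_MS = HEARTBEAT_INTERVAL_MS * 2.5; // two missed beats plus slack

function connect(url: string): void {
  const es = new EventSource(url);
  let lastBeat = Date.now();

  // Heartbeats reset the stall clock; data events could do the same.
  es.addEventListener("heartbeat", () => { lastBeat = Date.now(); });

  const watchdog = setInterval(() => {
    if (Date.now() - lastBeat > STALL_MS) {
      // Zombie connection: the socket looks open but nothing arrives.
      clearInterval(watchdog);
      es.close();
      const jitter = Math.random() * 2000; // see the SSE jitter note below
      setTimeout(() => connect(url), jitter);
    }
  }, 5_000);

  es.onerror = () => { lastBeat = 0; }; // let the watchdog recycle promptly
}

connect("/api/v1/events");
```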

Tech stack

React 19, TypeScript, Vite 7 with React Server Components (entry.rsc.tsx → entry.ssr.tsx → entry.browser.tsx pipeline). TanStack Query v5 for server state. Context API for Auth, Data, SSE, Theme, Settings, and Permissions. Zod for runtime validation. MUI theme (dark/light, presets). Firebase OAuth with optional domain allowlist for access control.

Settings backend: a custom HTTP server on port 3001 writing to SQLite. Three DB_MODE variants — local (./server/data/settings.db), docker (/data/settings.db), and prod (/data/prod/settings.db). Bind-mount from local disk or block storage only — no NFS/CIFS (WAL corruption guaranteed).
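
Roughly, the DB_MODE switch could resolve like this; a sketch assuming better-sqlite3, with the three paths taken from above and the resolver itself illustrative:

```typescript
// Illustrative DB_MODE resolver for the settings backend (better-sqlite3 assumed).
import Database from "better-sqlite3";

const DB_PATHS: Record<string, string> = {
  local: "./server/data/settings.db",
  docker: "/data/settings.db",
  prod: "/data/prod/settings.db",
};

const mode = process.env.DB_MODE ?? "local";
const dbPath = DB_PATHS[mode];
if (!dbPath) throw new Error(`Unknown DB_MODE: ${mode}`);

const db = new Database(dbPath);
// WAL only behaves on local disk or block storage, which is why
// NFS/CIFS mounts are ruled out above.
db.pragma("journal_mode = WAL");
```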

Proxy architecture: all external API calls are proxied through the server. DATA_FETCH_URL and DATA_FETCH_AUTH stay on the server. The client holds only a short-lived Firebase JWT that rotates automatically. API credentials never reach the browser.
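
A minimal sketch of that proxy shape, assuming an Express route; the route path, response handling, and the verifyFirebaseJwt helper are illustrative stand-ins:

```typescript
// Hypothetical proxy route: upstream credentials stay in this process,
// the browser only ever presents its short-lived Firebase JWT.
import express from "express";

const app = express();

app.get("/api/v1/data/:resource", async (req, res) => {
  const jwt = req.headers.authorization?.replace("Bearer ", "");
  if (!jwt || !(await verifyFirebaseJwt(jwt))) {
    return res.status(401).json({ error: "unauthorized" });
  }
  // DATA_FETCH_URL and DATA_FETCH_AUTH never leave the server.
  const upstream = await fetch(
    `${process.env.DATA_FETCH_URL}/${req.params.resource}`,
    { headers: { Authorization: process.env.DATA_FETCH_AUTH! } },
  );
  res.status(upstream.status).json(await upstream.json());
});

// Stand-in for JWKS-based verification (see the Docker section).
declare function verifyFirebaseJwt(token: string): Promise<boolean>;
```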

SSE endpoint hardening

The SSE route is the only endpoint that cannot accept a Bearer header because the browser EventSource API does not allow custom headers. I did not want to leave it unprotected, so I designed six layers of defense:

  1. No identifiers on the wire. Broadcast payloads are pure invalidation signals — {} for every event type. Firebase UIDs used to flow through settings:changed / viewTransition:changed payloads; they are now stripped by eventPublisher.ts before publish.
  2. Origin allow-list. Browsers sending an Origin header must match CORS_ORIGIN or get 403 before a connection slot is consumed.
  3. Strict response CSP. default-src 'none'; frame-ancestors 'none' plus X-Content-Type-Options: nosniff.
  4. Connection limits: per-IP (SSE_MAX_CONNECTIONS_PER_IP, default 10) and global (SSE_MAX_CONNECTIONS_TOTAL, default 1000) with a throttled warn log at 80% of the global cap so ops can react before the 503 cliff.
  5. Optional single-use ticket auth (off by default). When SSE_REQUIRE_TICKET=true, clients must first POST /api/v1/events/ticket with a Firebase JWT to obtain a 30-second single-use ticket, then open the EventSource with ?ticket=<id>. Reconnect requires a fresh ticket (a client sketch of this flow follows the list).
  6. Rate limiter: the global 100 req/min per-IP rate limiter applies to this route, so reconnect storms from a single IP are bounded.
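
A client-side sketch of the layer-5 ticket flow. The endpoint paths and the 30-second single-use semantics are from the list above; the response shape is an assumption:

```typescript
// Hypothetical client flow for SSE_REQUIRE_TICKET=true.
async function openProtectedEventSource(firebaseJwt: string): Promise<EventSource> {
  // Step 1: trade the Firebase JWT for a 30-second single-use ticket.
  const res = await fetch("/api/v1/events/ticket", {
    method: "POST",
    headers: { Authorization: `Bearer ${firebaseJwt}` },
  });
  if (!res.ok) throw new Error(`ticket request failed: ${res.status}`);
  const { ticket } = (await res.json()) as { ticket: string }; // assumed shape

  // Step 2: EventSource cannot send custom headers, so the ticket rides
  // the query string. Each ticket is single-use, so a reconnect has to
  // repeat the whole exchange.
  return new EventSource(`/api/v1/events?ticket=${encodeURIComponent(ticket)}`);
}
```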

Performance tuning

  • WebSocket batch window of 50ms for non-critical events — bundling multiple updates into a single render pass. Critical events (snapshot, call.*, user.*) bypass the timer and flush on the next microtask (sketched after this list).
  • SSE jitter up to 2000ms to mitigate thundering-herd reconnects.
  • Optional L1 cache trust window (PERF_CACHE_TRUST_TTL_S) to reduce database lookups during high-frequency settings reads.
  • Optional UNION ALL query batching to collapse 46+ settings queries into ~12.
  • Optional connection-age recycling (SSE_MAX_CONNECTION_AGE_MS) that force-closes connections after a configurable age and lets the client auto-reconnect with Last-Event-ID replay. Recommended production value for long-lived kiosk deployments: 1 hour.
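
A sketch of the batch window with the critical-event bypass from the first bullet; the event names are from the list, the batcher structure is illustrative:

```typescript
// Illustrative event batcher: non-critical events coalesce into one
// render pass per 50ms window; critical events flush on the next microtask.
const BATCH_WINDOW_MS = 50;
const CRITICAL = [/^snapshot$/, /^call\./, /^user\./];

type WsEvent = { type: string; payload: unknown };

function createBatcher(apply: (events: WsEvent[]) => void) {
  let queue: WsEvent[] = [];
  let timer: ReturnType<typeof setTimeout> | null = null;
  let microtaskScheduled = false;

  const flush = () => {
    timer = null;
    microtaskScheduled = false;
    if (queue.length === 0) return;
    const batch = queue;
    queue = [];
    apply(batch); // one render pass for the whole batch
  };

  return (event: WsEvent) => {
    queue.push(event);
    if (CRITICAL.some((re) => re.test(event.type))) {
      // Critical: cancel the timer and flush on the next microtask.
      if (timer) clearTimeout(timer);
      if (!microtaskScheduled) {
        microtaskScheduled = true;
        queueMicrotask(flush);
      }
    } else if (!timer && !microtaskScheduled) {
      timer = setTimeout(flush, BATCH_WINDOW_MS);
    }
  };
}
```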

Permissions

Single permission key liveDeck with three flags: Read, Edit, Remove. Server enforces via requirePermission() middleware on every route. Default-deny on any failure. Client hides UI elements the user cannot use.
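
What requirePermission() might look like with default-deny, as a sketch; the middleware shape and the permission lookup are assumptions, while the liveDeck key and its flags are from above:

```typescript
// Hypothetical default-deny permission middleware for the liveDeck key.
import type { Request, Response, NextFunction } from "express";

type Flag = "read" | "edit" | "remove";

function requirePermission(flag: Flag) {
  return async (req: Request, res: Response, next: NextFunction) => {
    try {
      const perms = await loadPermissions(req); // e.g. derived from the verified JWT
      if (perms?.liveDeck?.[flag] === true) return next();
      res.status(403).json({ error: "forbidden" }); // no explicit grant, no access
    } catch {
      res.status(403).json({ error: "forbidden" }); // default-deny on any failure
    }
  };
}

// Usage: app.delete("/api/v1/layouts/:id", requirePermission("remove"), handler);

// Stand-in for the real lookup.
declare function loadPermissions(
  req: Request,
): Promise<{ liveDeck?: Record<Flag, boolean> } | null>;
```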

Docker

  • Production: 2-stage Docker build. Stage 1 builds the Vite client bundle with VITE_* env vars baked in. Stage 2 runs Node serving API + SSE + RSC-rendered frontend on a single port. No .env in the final image. JWT verification uses Firebase public JWKS, not a service account key.
  • Development: no build step. Source code bind-mounted, Vite HMR + tsx watch, node_modules as a named volume to avoid host/container platform mismatch.
  • Health checks: /api/v1/health and /api/v1/health/ready (with database readiness). Docker health check runs every 30 seconds.
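
The liveness/readiness split might look like this, reusing the hypothetical app and db handles from the sketches above; the probe query is an assumption:

```typescript
// /health answers if the process is up; /health/ready also proves
// the SQLite handle still answers queries.
app.get("/api/v1/health", (_req, res) => {
  res.json({ status: "ok" });
});

app.get("/api/v1/health/ready", (_req, res) => {
  try {
    db.prepare("SELECT 1").get(); // cheap database readiness probe
    res.json({ status: "ready" });
  } catch {
    res.status(503).json({ status: "not-ready" });
  }
});
```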

What I learned

Dashboards displayed on TVs are fundamentally different from dashboards used on desktops. Every assumption about user interaction is wrong. The important questions are "does it survive overnight", "does it degrade gracefully when data stops arriving", and "does a random network blip require someone to walk into the call center and click refresh". The answer to the last one has to be no.

I ended up writing more reconnection, zombie-detection, and SSE-hardening code than actual chart code. That is normal for unattended production systems, and it is exactly the boring work that decides whether the thing stays running through the night.

License: Proprietary, Moon Holidays Co., Ltd. All rights reserved.

Working on something similar?

I take on a handful of engagements at a time: architecture reviews, platform rescues, AI integration, and fractional technical leadership. The clearer the brief, the faster the reply.
