Live Deck: a call center dashboard for the TV wall
React 19 + Vite 7 + React Server Components, version 2.0.1, 9 widgets, 32×18 draggable grid, SSE streaming with zombie-connection detection
Overview
Live Deck is the wall-mounted dashboard the Moon Holidays call center watches all day. It consumes live data from the Travel Panel core platform and from Aircall telephony, rendered through React Server Components. The landing grid offers Sales, Customer Service, and Marketing tiles; the Customer Service dashboard is the fully built one, with nine widgets streaming live data, while the other two are placeholders reserved for future expansion.
It has to look crisp at 55 inches, never drop a frame, and never stop running.
The real problem
Off-the-shelf BI dashboards are built for desks, not for TVs. They assume someone is clicking around. A call center TV has to be glanceable, high-contrast, updated in real time, and ideally unattended for weeks. That last requirement turns out to be the hard one: the dashboard has to survive network blips, zombie SSE connections, tab suspension, and browser refresh loops without anyone walking into the room to fix it.
What got built
- Draggable 32×18 grid layouts for widget placement. Operators drag widgets into position once, save the profile, and forget. Layout migration handles changes between versions without losing user configurations.
- Multi-profile settings per user, so different shift leaders can have different layouts.
- Real-time WebSocket streaming of Aircall call data (`VITE_WSS_URL`), combined with historical data loaded on mount via TanStack Query.
- SSE push for server-initiated updates at `GET /api/v1/events`, with zombie-connection detection using named `heartbeat` events (not `:ping` comments) at a 30-second interval, so the client stall detector can surface connection death to the reconnect logic.
- TV and kiosk optimization: spatial navigation (D-pad friendly), auto-rotation, branded landing page with live server status indicators, and `VITE_ENABLE_SETTINGS_UI=false` for kiosk mode to hide editing controls.
- Per-widget configuration, each card tuned independently.
- Content-aware RTL because the dashboard runs in offices that work in both Hebrew and English.
- Demo mode without Firebase — set any credentials and the app boots to show the landing grid, useful for on-site demos without the production identity provider.
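A named heartbeat event makes zombie detection straightforward on the client: reset a timestamp on every heartbeat and declare the connection dead after two missed intervals. A minimal sketch of that idea, with class and listener names that are mine rather than the project's actual API:

```typescript
// HEARTBEAT_INTERVAL_MS matches the 30-second server interval described above;
// StallDetector is an illustrative name, not the project's real module.

const HEARTBEAT_INTERVAL_MS = 30_000;

class StallDetector {
  private lastBeat: number;

  constructor(now: number = Date.now()) {
    this.lastBeat = now;
  }

  /** Call from the EventSource "heartbeat" listener. */
  beat(now: number = Date.now()): void {
    this.lastBeat = now;
  }

  /** A connection is presumed dead after two missed heartbeats. */
  isStalled(now: number = Date.now()): boolean {
    return now - this.lastBeat > 2 * HEARTBEAT_INTERVAL_MS;
  }
}

// Browser-side wiring (sketch):
// const es = new EventSource("/api/v1/events");
// const detector = new StallDetector();
// es.addEventListener("heartbeat", () => detector.beat());
// setInterval(() => {
//   if (detector.isStalled()) { es.close(); scheduleReconnect(); }
// }, 10_000);
```

This is exactly why named events beat `:ping` comments: comments never reach `EventSource` listeners, so the client has nothing to reset its watchdog on.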
Tech stack
React 19, TypeScript, Vite 7 with React Server Components (entry.rsc.tsx → entry.ssr.tsx → entry.browser.tsx pipeline). TanStack Query v5 for server state. Context API for Auth, Data, SSE, Theme, Settings, and Permissions. Zod for runtime validation. MUI theme (dark/light, presets). Firebase OAuth with optional domain allowlist for access control.
Settings backend: a custom HTTP server on port 3001 writing to SQLite. Three `DB_MODE` variants: local (`./server/data/settings.db`), docker (`/data/settings.db`), and prod (`/data/prod/settings.db`). Bind-mount from local disk or block storage only; SQLite in WAL mode is prone to corruption on NFS/CIFS, so network filesystems are out.
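The three `DB_MODE` variants map to fixed paths; a sketch of that mapping (the helper name is assumed, the paths are the ones listed above):

```typescript
// Maps the DB_MODE environment value to its SQLite file path.
// settingsDbPath is an illustrative name, not the project's actual helper.

type DbMode = "local" | "docker" | "prod";

function settingsDbPath(mode: DbMode): string {
  switch (mode) {
    case "local":  return "./server/data/settings.db"; // dev checkout
    case "docker": return "/data/settings.db";         // container bind-mount
    case "prod":   return "/data/prod/settings.db";    // production volume
  }
}
```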
Proxy architecture: all external API calls are proxied through the server. DATA_FETCH_URL and DATA_FETCH_AUTH stay on the server. The client holds only a short-lived Firebase JWT that rotates automatically. API credentials never reach the browser.
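The proxy pattern boils down to: the browser sends a path plus its Firebase JWT, and the server rebuilds the upstream request with the secret attached. A sketch of that request assembly, with illustrative names and defaults (the real route shape may differ):

```typescript
// DATA_FETCH_URL / DATA_FETCH_AUTH live only in the server process env.
// buildUpstreamRequest is a hypothetical helper, not the project's actual code.

const DATA_FETCH_URL = process.env.DATA_FETCH_URL ?? "https://upstream.example";
const DATA_FETCH_AUTH = process.env.DATA_FETCH_AUTH ?? "Bearer server-secret";

interface UpstreamRequest {
  url: string;
  headers: Record<string, string>;
}

// Called only after the Firebase JWT on the incoming request has been verified.
function buildUpstreamRequest(clientPath: string): UpstreamRequest {
  return {
    url: new URL(clientPath, DATA_FETCH_URL).toString(),
    headers: { Authorization: DATA_FETCH_AUTH }, // secret never reaches the browser
  };
}
```

The design choice is that the browser-to-server trust boundary carries only the short-lived JWT, while the server-to-upstream boundary carries the long-lived credential; neither token crosses into the other zone.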
SSE endpoint hardening
The SSE route is the only endpoint that cannot accept a Bearer header because the browser EventSource API does not allow custom headers. I did not want to leave it unprotected, so I designed six layers of defense:
- No identifiers on the wire. Broadcast payloads are pure invalidation signals: `{}` for every event type. Firebase UIDs used to flow through `settings:changed` / `viewTransition:changed` payloads; they are now stripped by `eventPublisher.ts` before publish.
- Origin allow-list. Browsers sending an `Origin` header must match `CORS_ORIGIN` or get a 403 before a connection slot is consumed.
- Strict response CSP: `default-src 'none'; frame-ancestors 'none'` plus `X-Content-Type-Options: nosniff`.
- Connection limits: per-IP (`SSE_MAX_CONNECTIONS_PER_IP`, default 10) and global (`SSE_MAX_CONNECTIONS_TOTAL`, default 1000), with a throttled warn log at 80% of the global cap so ops can react before the 503 cliff.
- Optional single-use ticket auth (off by default). When `SSE_REQUIRE_TICKET=true`, clients must first `POST /api/v1/events/ticket` with a Firebase JWT to obtain a 30-second single-use ticket, then open the EventSource with `?ticket=<id>`. Reconnecting requires a fresh ticket.
- Rate limiting: the global 100 req/min per-IP rate limiter applies to this route, so reconnect storms from a single IP are bounded.
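The ticket layer is easy to get wrong: a ticket must die on first use and on expiry, whichever comes first. A sketch of those semantics, with assumed names and an in-memory map standing in for whatever store the server actually uses:

```typescript
// TICKET_TTL_MS matches the 30-second window described above.
// TicketStore is an illustrative name, not the project's real module.
import { randomUUID } from "node:crypto";

const TICKET_TTL_MS = 30_000;

class TicketStore {
  private tickets = new Map<string, number>(); // ticket id → expiry timestamp

  /** Issued after the POST /api/v1/events/ticket JWT check succeeds. */
  issue(now: number = Date.now()): string {
    const id = randomUUID();
    this.tickets.set(id, now + TICKET_TTL_MS);
    return id;
  }

  /** Single-use: the ticket is deleted on first check, valid or not. */
  consume(id: string, now: number = Date.now()): boolean {
    const expiry = this.tickets.get(id);
    this.tickets.delete(id);
    return expiry !== undefined && now <= expiry;
  }
}
```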
Performance tuning
- WebSocket batch window of 50 ms for non-critical events, bundling multiple updates into a single render pass. Critical events (`snapshot`, `call.*`, `user.*`) bypass the timer and flush on the next microtask.
- SSE reconnect jitter of up to 2000 ms to mitigate thundering-herd reconnects.
- Optional L1 cache trust window (`PERF_CACHE_TRUST_TTL_S`) to reduce database lookups during high-frequency settings reads.
- Optional UNION ALL query batching to collapse 46+ settings queries into roughly 12.
- Optional connection-age recycling (`SSE_MAX_CONNECTION_AGE_MS`) that force-closes connections after a configurable age and lets the client auto-reconnect with `Last-Event-ID` replay. Recommended production value for long-lived kiosk deployments: 1 hour.
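The batch window with critical bypass from the first bullet can be sketched like this (the class name and event-type matching are mine; the real module differs):

```typescript
// Non-critical events are buffered for up to 50 ms and flushed together;
// critical events cancel the timer and flush everything on the next microtask.
// EventBatcher is an illustrative name, not the project's actual code.

const BATCH_WINDOW_MS = 50;
const CRITICAL = [/^snapshot$/, /^call\./, /^user\./];

class EventBatcher {
  private buffer: unknown[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private flushFn: (events: unknown[]) => void) {}

  push(type: string, event: unknown): void {
    this.buffer.push(event);
    if (CRITICAL.some((re) => re.test(type))) {
      // Critical path: bypass the window, flush on the next microtask.
      if (this.timer) { clearTimeout(this.timer); this.timer = null; }
      queueMicrotask(() => this.flush());
    } else if (!this.timer) {
      this.timer = setTimeout(() => { this.timer = null; this.flush(); }, BATCH_WINDOW_MS);
    }
  }

  private flush(): void {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.flushFn(batch); // one render pass for the whole batch
  }
}
```

A critical event also drags any already-buffered non-critical events along with it, so ordering within the stream is preserved.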
Permissions
Single permission key `liveDeck` with three flags: Read, Edit, Remove. The server enforces via `requirePermission()` middleware on every route. Default-deny on any failure. The client hides UI elements the user cannot use.
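Default-deny is the key property: any missing piece of the permission record must fail closed. A sketch of the check (`requirePermission` is the name used above; the record shape and the Express-style wrapper are my assumptions):

```typescript
// A missing user, missing key, or missing flag all evaluate to false —
// only an explicit `true` grants access.

type Flag = "Read" | "Edit" | "Remove";
type Permissions = Record<string, Partial<Record<Flag, boolean>>>;

function isAllowed(
  perms: Permissions | undefined,
  key: string,
  flag: Flag,
): boolean {
  return perms?.[key]?.[flag] === true; // default-deny on any gap
}

// Express-style middleware wrapper (req/res types elided for brevity):
function requirePermission(flag: Flag) {
  return (req: any, res: any, next: () => void) => {
    if (isAllowed(req.user?.permissions, "liveDeck", flag)) return next();
    res.status(403).end();
  };
}
```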
Docker
- Production: two-stage Docker build. Stage 1 builds the Vite client bundle with `VITE_*` env vars baked in. Stage 2 runs Node, serving API + SSE + RSC-rendered frontend on a single port. No `.env` in the final image. JWT verification uses Firebase public JWKS, not a service account key.
- Development: no build step. Source code is bind-mounted, Vite HMR + tsx watch, and `node_modules` lives in a named volume to avoid host/container platform mismatch.
- Health checks: `/api/v1/health` and `/api/v1/health/ready` (the latter includes database readiness). The Docker health check runs every 30 seconds.
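The health-check wiring could look like the fragment below. The 30-second interval and the `/api/v1/health` path come from above; the port, timeout, and retry values are assumptions for illustration:

```dockerfile
# Sketch only — port and timeout/retry values are placeholders.
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=3 \
  CMD node -e "fetch('http://127.0.0.1:3000/api/v1/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"
```

Using `node -e` with the built-in `fetch` avoids shipping `curl` in the final image.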
What I learned
Dashboards displayed on TVs are fundamentally different from dashboards used on desktops. Every assumption about user interaction is wrong. The important questions are "does it survive overnight", "does it degrade gracefully when data stops arriving", and "does a random network blip require someone to walk into the call center and click refresh". The answer to the last one has to be no.
I ended up writing more reconnection, zombie-detection, and SSE-hardening code than actual chart code. That is normal for unattended production systems, and it is exactly the boring work that decides whether the thing stays running through the night.
License: Proprietary, Moon Holidays Co., Ltd. All rights reserved.
