Travel Panel WebSocket Server: real-time at uWebSockets.js speed
High-performance WebSocket gateway for notifications, chat, and live state, running on port 8965 since August 2025
Overview
Travel Panel needed live updates. New bookings had to appear in operator tabs the moment they were created. Chat messages between staff had to feel instant. System alerts had to reach every active session across the whole platform. I built the WebSocket server that powers all of that, running on port 8965 in every environment since August 2025.
Why uWebSockets.js
The obvious choices are ws and socket.io. They are easy, they are popular, and they are fine for small loads. They are also ten to fifteen times slower than uWebSockets.js under serious traffic. For a system that has to serve every active Moon Holidays user session at once, raw throughput matters. uWebSockets.js is a C++ WebSocket server with Node bindings, and it is the fastest thing on the JavaScript runtime.
Architecture
- uWebSockets.js as the transport, wrapped in a thin TypeScript layer for application logic.
- MongoDB for persistence of delivered messages, chat history, and subscription state. A single MongoDB URL covers both databases the service needs.
- Redis for pub/sub and fan-out across multiple WebSocket server processes. Any backend service in the Moon Holidays platform can publish to Redis and every connected client sees the message.
- Firebase Admin SDK for handshake authentication: every socket verifies its token before it is allowed to subscribe. Clients connect with `ws://host:8965/ws/{firebase-token}`.
- Tenant-scoped fan-out enforced at the gateway layer, so a message for one tenant never leaks to another.
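Two pieces of the gateway's admission path can be sketched in a few lines of TypeScript. Both `extractToken` and `recipients` are illustrative names under assumed shapes, not the server's actual API:

```typescript
// Pull the Firebase ID token out of a connect path like /ws/{firebase-token}.
// The real handshake then verifies it with the Firebase Admin SDK
// (admin.auth().verifyIdToken) before the upgrade is allowed to complete.
function extractToken(path: string): string | null {
  const match = /^\/ws\/([^/?#]+)$/.exec(path);
  return match ? decodeURIComponent(match[1]) : null;
}

// Tenant-scoped fan-out: a publish for one tenant is only delivered to
// sockets whose authenticated tenant id matches.
interface Client {
  userId: string;
  tenantId: string;
}

function recipients(clients: Client[], tenantId: string): Client[] {
  return clients.filter((c) => c.tenantId === tenantId);
}
```

The point of filtering at the gateway, rather than trusting publishers, is that a bug anywhere upstream still cannot leak one tenant's messages to another.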
API surface
HTTP endpoints alongside the WebSocket transport:
- `GET /api/auth/me` — current user
- `GET /api/auth/verify` — verify token
- `GET /api/users/online` — list online users
- `GET /api/users/search?q=` — search users
- `GET /api/users/by-email/:email` — lookup by email
- `POST /api/chat/send` — send message
- `GET /api/chat/history` — message history
- `GET /api/chat/rooms` — user's rooms
- `POST /api/chat/messages/mark-read` — mark messages as read
- `POST /api/notifications` — send notification
- `GET /api/notifications` and `/unread-count` — list notifications, unread count
- `GET /health`, `GET /metrics` — health and metrics
WebSocket actions
The message envelope is `{action, data}`. Supported actions:
- `chat` — send chat message
- `notification` — send notification
- `presence` — update presence status
- `ping`/`pong` — keep-alive
- `get_connected_users` — list online users
- `fetch_unacknowledged_notifications` — retrieve pending notifications
- `acknowledge_notification` — mark as read
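A minimal dispatcher over that envelope might look like the sketch below. The handler bodies are placeholders; the real server wires these actions into the chat and notification services:

```typescript
// Route an incoming {action, data} envelope to a registered handler.
type Handler = (data: unknown) => unknown;

const handlers: Record<string, Handler> = {
  // Keep-alive: answer a ping with a pong envelope.
  ping: () => ({ action: "pong" }),
  // Echo presence updates back; the real handler fans them out.
  presence: (data) => ({ action: "presence", data }),
};

function dispatch(raw: string): unknown {
  let envelope: { action?: string; data?: unknown };
  try {
    envelope = JSON.parse(raw);
  } catch {
    return { error: "invalid JSON" };
  }
  const handler = envelope.action ? handlers[envelope.action] : undefined;
  return handler ? handler(envelope.data) : { error: "unknown action" };
}
```

Unknown actions and malformed JSON get an error envelope back instead of silently dropping, which keeps client bugs visible.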
Rate limits and connection ceilings
Rate limits differ by environment because dev workloads need headroom for testing while production needs abuse protection:
| Setting | Dev | Production |
|---|---|---|
| Messages per minute | 1000 | 120 |
| Connection attempts per minute | 500 | 200 |
| Max connections per user | 10 | 10 |
| Idle timeout (seconds) | 120 | 120 |
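The per-minute message limit in the table can be enforced with a fixed-window counter per user. This is a sketch of the policy, not the server's exact implementation:

```typescript
// Fixed-window per-minute message limiter: each user gets maxPerMinute
// messages per clock-aligned 60-second window (120 in production, 1000 in dev).
class MessageLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private readonly maxPerMinute: number) {}

  allow(userId: string, now: number = Date.now()): boolean {
    const windowStart = Math.floor(now / 60_000) * 60_000;
    const entry = this.counts.get(userId);
    if (!entry || entry.windowStart !== windowStart) {
      // First message in a fresh window: reset the counter.
      this.counts.set(userId, { windowStart, count: 1 });
      return true;
    }
    if (entry.count >= this.maxPerMinute) return false;
    entry.count += 1;
    return true;
  }
}
```

A fixed window is cheap and good enough for abuse protection; a sliding window would smooth out the burst allowed at each window boundary at the cost of more bookkeeping.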
Dev vs production
Dev connects to an external Redis from the travelpanel-fastapi Docker network for shared state with the rest of the platform, hot-reloads via tsx watch, and runs with LOG_LEVEL=debug. Production uses a bundled Redis container with persistence, compiled JS, and LOG_LEVEL=info.
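The dev/production split can be captured in a single config loader. Apart from `LOG_LEVEL`, the variable names and defaults below are assumptions for illustration, not the server's actual env contract:

```typescript
// Derive environment-specific settings from process env vars.
interface Config {
  port: number;
  logLevel: string;
  messagesPerMinute: number;
  maxConnectionsPerUser: number;
}

function loadConfig(env: Record<string, string | undefined>): Config {
  const isProd = env.NODE_ENV === "production";
  return {
    port: Number(env.PORT ?? 8965),
    logLevel: env.LOG_LEVEL ?? (isProd ? "info" : "debug"),
    messagesPerMinute: isProd ? 120 : 1000,
    maxConnectionsPerUser: 10, // same ceiling in both environments
  };
}
```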
Project structure
```
uws-server/
├── src/
│   ├── index.ts        # Entry point
│   ├── app.ts          # App setup
│   ├── config.ts       # Configuration
│   ├── handlers/       # HTTP & WebSocket handlers
│   ├── services/       # Business logic
│   ├── database/       # MongoDB & Redis
│   └── middleware/     # Rate limiting, etc.
├── tests/              # Unit tests
├── Dockerfile          # Production image
└── Dockerfile.dev      # Development image
```
Production reality
The server has been running in production since launch. Uptime is effectively continuous. Occasional Redis version bumps, a handful of TypeScript refactors as the feature set grew, and very little else. The fastest way I know to end up with a reliable system is to start with a small number of correct primitives and resist the urge to decorate them.
Tech stack
Node.js 20 LTS, TypeScript, uWebSockets.js, MongoDB, Redis, Firebase Admin SDK, Docker, Docker Compose.
Takeaway
Real-time is less about "how fast can you push bytes" and more about "what happens when things go wrong". Reconnection, message replay, authenticated handshakes, pub/sub across pods, graceful shutdown, rate limiting, per-user connection ceilings, idle timeouts. If all of those work, the WebSocket bit takes care of itself.
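Take reconnection as one example of that "when things go wrong" work: the client side mostly reduces to a capped exponential backoff schedule. The base and cap here are illustrative numbers, not the shipped client's values:

```typescript
// Compute reconnect delays: double the wait each attempt, capped at capMs,
// so a flapping server isn't hammered but recovery stays bounded.
function backoffDelays(attempts: number, baseMs = 500, capMs = 30_000): number[] {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * 2 ** i, capMs),
  );
}
```

In practice you would also add jitter so that thousands of clients dropped by the same outage do not all reconnect on the same tick.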
