StaySync: B2B hotel allotment management
NestJS 11 + Drizzle + PostgreSQL 17 platform with 12 modules, 532 tests (99.63% allotment coverage), and a Contracts → Resolver → PoolBudget pipeline
Overview
StaySync is a B2B hotel allotment platform, built as a standalone service that integrates with the Travel Panel core platform for inventory and availability. It manages room blocks (hardblocks), booking orders, allotment calendars, and real-time sync with the upstream Moon Holidays supplier API. This is the software that keeps hotel inventory accurate across dozens of partners and prevents the nightmare scenario of selling a room that has already been allocated elsewhere.
The problem
Hotel allotment management sounds simple. You have N rooms, you allocate them across M partners, you track what is sold versus what is available. The complexity is everywhere around those basics:
- Bookings expire after a configurable window (default 15 minutes in BOOKING_EXPIRY_MINUTES). Expired bookings return their inventory to the pool, but only if no downstream confirmation has landed in the meantime.
- Hardblocks reserve inventory with different rules from sold bookings. They can be amended, released, or partially returned.
- Upstream suppliers return different room counts at different times. Reconciliation has to absorb noise without corrupting the local view.
- Partners each want their own scoped view of inventory. Role-scoped APIs are not a bolt-on; they are the primary access model.
- Every number has to reconcile against the single source of truth at the hotel supplier API.
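The expiry rule in the first bullet can be sketched as a pure function. This is an illustrative sketch, not the actual implementation; the Booking shape and status names are assumptions:

```typescript
// Hypothetical booking shape; the real schema lives in Drizzle tables.
type Booking = {
  id: string;
  createdAt: Date;
  status: "PENDING" | "CONFIRMED" | "CANCELLED";
};

const BOOKING_EXPIRY_MINUTES = 15; // configurable default

function isExpired(b: Booking, now: Date): boolean {
  const ageMs = now.getTime() - b.createdAt.getTime();
  // Only pending bookings expire; a confirmed booking keeps its
  // inventory even if the window has long elapsed.
  return b.status === "PENDING" && ageMs > BOOKING_EXPIRY_MINUTES * 60_000;
}

function sweepExpired(bookings: Booking[], now: Date): Booking[] {
  // Returns the bookings whose inventory should go back to the pool.
  return bookings.filter((b) => isExpired(b, now));
}
```

The guard on status is the whole point: expiry and confirmation race, and confirmation must win.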
Architecture
Core business modules
src/modules/
├── agency/ # Travel agency management
├── allotment/ # Calendar resolution pipeline
├── allotment-core/ # Shared allotment logic (breaks circular deps)
├── allotment-event/ # Allotment event CRUD
├── reservation/ # Bookings + sale orders
├── hardblock/ # Room block lifecycle
├── external-data/ # MHG API sync / ETL
├── webhook/ # External event webhooks
├── contract-search/ # Contract text search
├── sse/ # Server-Sent Events
├── auth/ # Firebase auth guard (global)
└── health/ # Health probe
The allotment resolution pipeline
Allotment calendars are not stored as flat numbers. They are resolved on demand through a pipeline:
Contracts → Resolver → EventApplier → PoolBudget → per-day availability
Each contract from the upstream supplier feeds the resolver, which applies every allotment event on top (INCREASE_ALLOTMENT, REDUCE_ALLOTMENT, BLOCK_CONTRACT, BLOCK_AMENDMENT, RATE_CHANGE), then runs the PoolBudget calculation to produce per-day availability for each room type. The output is consistent regardless of how the data was loaded or in what order the events arrived.
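A minimal sketch of the fold described above, with a reduced event set and invented field names (the real event payloads are richer):

```typescript
// Three of the five event kinds, enough to show the fold.
type AllotmentEvent =
  | { kind: "INCREASE_ALLOTMENT"; day: string; rooms: number }
  | { kind: "REDUCE_ALLOTMENT"; day: string; rooms: number }
  | { kind: "BLOCK_CONTRACT"; day: string };

function resolveDay(
  baseline: number, // the contract's base allotment for this room type
  day: string,
  events: AllotmentEvent[],
): number {
  // Fold events over the baseline. Determinism comes from sorting events
  // into a canonical order before applying, regardless of arrival order.
  return events
    .filter((e) => e.day === day)
    .reduce((avail, e) => {
      switch (e.kind) {
        case "INCREASE_ALLOTMENT":
          return avail + e.rooms;
        case "REDUCE_ALLOTMENT":
          return Math.max(0, avail - e.rooms); // never go negative
        case "BLOCK_CONTRACT":
          return 0; // a blocked contract contributes nothing that day
      }
    }, baseline);
}
```

Resolving on demand instead of storing flat numbers means there is no stored total to drift out of sync with the events that produced it.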
Hardblock module
Hardblocks are the most feature-dense module. The operations it exposes:
- Create / amend / release
- Auto-release scheduling
- Slot-level inventory grid
- Bulk operations (bulk release, bulk extend)
- Amendment preview state, amendment diff grid, partial amendment overlay
- Revert amendment
- Inventory for date ranges
- Reserve rooms from block
- Release day inventory
- Cell detail with bookings and pricing
Every write touches the audit log, so there is always a trail back to who did what when.
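The amendment diff grid from the list above can be pictured as a comparison of per-day room counts. A hedged sketch with an assumed grid shape:

```typescript
// Hypothetical grid shape: ISO day -> room count for one room type.
type Grid = Record<string, number>;

// Compare current vs proposed counts and report only the changed cells,
// which is what an amendment diff grid renders.
function diffGrid(
  current: Grid,
  proposed: Grid,
): { day: string; from: number; to: number }[] {
  const days = new Set([...Object.keys(current), ...Object.keys(proposed)]);
  return [...days]
    .filter((d) => (current[d] ?? 0) !== (proposed[d] ?? 0))
    .map((d) => ({ day: d, from: current[d] ?? 0, to: proposed[d] ?? 0 }))
    .sort((a, b) => a.day.localeCompare(b.day));
}
```

Treating a missing day as zero lets the same diff cover extensions (new days appear) and releases (days drop to zero).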
Reservation module
A composite of three services: BookingService, SaleOrderService, and AllotmentAvailabilityService. The critical path is POST /bookings, which does an availability check inside an advisory lock. Without the lock, two partners clicking "Book" at the same millisecond would both succeed and the overbooking would only be caught later. The advisory lock serializes the critical section at the Postgres level so only one booking for the same resource can progress at a time.
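Postgres advisory locks take a signed 64-bit key, so the booked resource has to be hashed down to one. A sketch of that derivation (the key inputs and SQL usage are assumptions, not the actual code):

```typescript
import { createHash } from "node:crypto";

// Derive a deterministic signed 64-bit advisory-lock key from the
// identity of the resource being booked (hypothetical field choices).
function advisoryLockKey(
  contractId: string,
  roomType: string,
  day: string,
): bigint {
  const digest = createHash("sha256")
    .update(`${contractId}:${roomType}:${day}`)
    .digest();
  // First 8 bytes of the hash, read as a signed 64-bit integer.
  return digest.readBigInt64BE(0);
}

// Illustrative use inside the booking transaction:
//   SELECT pg_advisory_xact_lock($1);   -- $1 = advisoryLockKey(...)
//   ... availability check + INSERT booking ...
// pg_advisory_xact_lock releases automatically at commit or rollback,
// so there is no unlock path to forget.
```

Any two requests for the same contract/room/day compute the same key and therefore queue behind one another; unrelated bookings proceed in parallel.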
Confirming a booking promotes it to a sale order through PATCH /bookings/:groupId/confirm. Cancelling releases the inventory back to the pool. The whole lifecycle is audited.
Database
PostgreSQL 17 via Docker. Schema managed with drizzle-kit push (no migration files): the Drizzle schema definition is declarative and checked into git as the single source of truth. Tables:
agencies, allotment_events, bookings (+ booking_audit_log), sale_orders, hard_block_meta, status_transform_configs, sync_runs
All repositories extend a shared BaseRepository for consistent query patterns, pagination DTOs, and transaction handling via a dedicated TransactionService.
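The shared pagination pattern can be sketched as a pure function. The DTO and result shapes here are assumptions based on the description above, not the real BaseRepository API:

```typescript
// Hypothetical shapes for the shared pagination contract.
type PaginationDto = { page: number; pageSize: number }; // 1-indexed page
type Paginated<T> = {
  items: T[];
  total: number;
  page: number;
  pageSize: number;
};

// In the real repository this would translate to LIMIT/OFFSET in a
// Drizzle query; slicing an in-memory array shows the same arithmetic.
function paginate<T>(rows: T[], dto: PaginationDto): Paginated<T> {
  const start = (dto.page - 1) * dto.pageSize;
  return {
    items: rows.slice(start, start + dto.pageSize),
    total: rows.length,
    page: dto.page,
    pageSize: dto.pageSize,
  };
}
```

Centralizing this in one base class is what keeps every list endpoint's response envelope identical.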
Testing
17 test suites. 532 tests. All passing.
| Module | Statement coverage | Tests |
|---|---|---|
| allotment | 99.63% | 6 suites (224 tests) |
| reservation | 72.24% | 5 suites |
| allotment-event | 57.89% | 2 suites |
| hardblock | 43.73% | 3 suites |
| external-data | 14.74% | 1 suite |
Jest 30 with @swc/jest for fast TypeScript compilation. Tests co-located with source files as *.spec.ts. The allotment module — the most complex one, with the resolver pipeline — sits at 99.63% coverage because the business logic is non-negotiable: a bug in the pipeline produces wrong availability, and wrong availability loses money.
Frontends
Two consumers of the API, each a Next.js 16 app on React 19 with Tailwind CSS 4 and App Router:
- stay-sync-frontend — the operator portal on port 4000
- stay-sync-task-frontend — the agent-facing task app on port 4500
Both authenticate with Firebase. Both use the Swagger documentation at /api/docs as the contract surface.
External data sync
The external-data module is the ETL layer that syncs with the upstream MHG hotel supplier API (office-fastapi-dev1.mhgthailand.com/api/v1). It supports manual trigger via POST /sync and auto-sync via a file watcher that re-runs when source data changes. Webhook endpoints accept contract and event updates from external systems so updates arrive push-style instead of being polled.
Tech stack
NestJS 11, TypeScript, Drizzle ORM, PostgreSQL 17, Firebase Admin SDK (JWT auth), Jest 30 with @swc/jest, Zod, Swagger, Docker.
API prefix: /api/v1. Server runs on port 3001.
Takeaway
When integrating with an external system of record, build defensively. The upstream API will return unexpected shapes, drift between environments, and occasionally return the same room count in two different units. Validate everything at the wall and trust nothing past it.
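"Validate at the wall" can be as simple as a guard that refuses to let an unparsed upstream payload touch local state. A minimal sketch with invented field names (in the real service this is where Zod schemas would sit):

```typescript
// Hypothetical upstream payload shape after validation.
type UpstreamContract = { contractId: string; roomCount: number };

// Parse unknown input at the boundary; past this function the rest of
// the system can trust the types.
function parseUpstreamContract(raw: unknown): UpstreamContract {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("payload must be an object");
  }
  const o = raw as Record<string, unknown>;
  if (typeof o.contractId !== "string" || o.contractId.length === 0) {
    throw new Error("contractId must be a non-empty string");
  }
  if (
    typeof o.roomCount !== "number" ||
    !Number.isInteger(o.roomCount) ||
    o.roomCount < 0
  ) {
    throw new Error("roomCount must be a non-negative integer");
  }
  return { contractId: o.contractId, roomCount: o.roomCount };
}
```

Rejecting loudly at the wall turns upstream drift into a visible sync failure instead of silently corrupted availability.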
The allotment module's 99.63% coverage follows from the same logic: the resolver pipeline is where a bug silently produces wrong availability. Invest the testing effort where the stakes are highest and let the less critical modules catch up later.
For a simpler full-stack reference project I maintain publicly (typed backend, auth, RBAC, Swagger-documented API), see PyNextStack.
