
System Architecture Overview

What This System Does

The Orbital Manager Backend is a microservices system that processes restaurant orders from the Otter POS system and distributes them to kitchen services. Orders flow through three main services that communicate asynchronously using RabbitMQ message queues.

High-Level Architecture

graph TB
    subgraph External
        O[Otter POS]
        F[Frontend]
        SF[Snowflake]
        PG[(PostgreSQL)]
    end

    subgraph "Order Management Service :8001"
        WH[Webhook Endpoint]
        OEC[Otter Event Consumer]
        RMQ_PUB[RabbitMQ Publisher]
        OMS_WORKERS[Sync Workers]
    end

    subgraph "Kitchen Batch Tool Service :8000"
        RMQ_CON[RabbitMQ Consumer]
        BI_API[Batch Items API]
        SSE[SSE Endpoint]
        KBT_WORKERS[Session Data Sync]
    end

    subgraph "Kitchen Prep Tool Service :8002"
        PREP_API[Prep Inventory API]
        TU_API[TU Inventory API]
        PAR_API[PAR Management API]
        AUTH_API[Authentication API]
        PREP_WORKERS[Background Workers]
    end

    subgraph "Shared Components"
        DB[Database Initialization]
        MSG[Message Broker]
        UTILS[Utilities]
        PAT[Patterns]
    end

    O -->|Webhooks| WH
    WH -->|Publish Events| RMQ_PUB
    RMQ_PUB -->|Otter Events| OEC
    OEC -->|Process & Save| PG
    OEC -->|Publish| RMQ_PUB
    RMQ_PUB -->|Protobuf Messages| RMQ_CON
    RMQ_CON -->|Process| BI_API
    BI_API -->|Events| SSE
    SSE -->|Real-time| F
    PREP_API -->|REST| F
    TU_API -->|REST| F
    PAR_API -->|REST| F

    OMS_WORKERS -->|Sync| SF
    KBT_WORKERS -->|Sync| SF
    PREP_WORKERS -->|Sync| SF

    BI_API -->|Store| PG
    PREP_API -->|Store| PG
    TU_API -->|Store| PG

    OEC -.->|Uses| DB
    OEC -.->|Uses| PAT
    RMQ_PUB -.->|Uses| MSG
    RMQ_CON -.->|Uses| MSG

How Services Communicate

All three services communicate through RabbitMQ, a message broker that enables asynchronous communication. Messages are serialized using Protocol Buffers (protobuf) for efficient, type-safe data exchange.

Key Communication Patterns:

  • FANOUT Exchanges: Messages published to an exchange are delivered to all queues bound to that exchange, so multiple services can receive the same message (sketched below)
  • Protocol Buffers: All inter-service messages use protobuf for serialization, defined in shared/proto/
  • Shared Code: Common functionality (database connections, message broker logic, utilities) lives in shared/ and is imported by each service
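
For illustration, here is a minimal sketch of the FANOUT pattern using the pika client. The otter_events exchange name comes from the sections below; the connection parameters, queue name, and payload are assumptions, and the real broker logic lives in shared/.

    import pika

    # Illustrative only: the production broker logic lives in shared/.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Publisher side: a FANOUT exchange copies each message to every bound
    # queue, regardless of routing key.
    channel.exchange_declare(exchange="otter_events", exchange_type="fanout")
    channel.basic_publish(exchange="otter_events", routing_key="", body=b"<protobuf bytes>")

    # Consumer side: each service binds its own queue to receive every message.
    result = channel.queue_declare(queue="oms.otter_events", durable=True)  # queue name assumed
    channel.queue_bind(exchange="otter_events", queue=result.method.queue)

    connection.close()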

The Three Services

1. Order Management Service (Port 8001)

What it does: Receives order webhooks from Otter POS, processes them, saves to the database, and publishes structured order data for other services to consume.

How it connects:

  • Receives: HTTP webhooks from Otter at /webhooks/otter
  • Publishes to RabbitMQ:
      • Raw Otter events → otter_events exchange
      • Processed structured orders → structured_order_data exchange
  • Consumes from RabbitMQ: Listens to its own otter_events queue to process events
  • Stores data: Saves orders to PostgreSQL (otter.orders schema)
  • Syncs data: Background workers sync customer and promotion data to Snowflake

Key Components:

  • Webhook endpoint that receives Otter events (sketched below)
  • RabbitMQ publisher that sends events to exchanges
  • Otter event consumer worker that processes events from the queue
  • Background sync workers for Snowflake data
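
A hedged sketch of the webhook step, assuming a FastAPI-style application (this page does not name the web framework). The /webhooks/otter path and the publish-first behavior come from this page; publish_to_exchange is a hypothetical helper.

    from fastapi import FastAPI, Request

    app = FastAPI()

    @app.post("/webhooks/otter")
    async def receive_otter_webhook(request: Request):
        payload = await request.body()
        # Publish the raw event immediately; normalization happens later in
        # the Otter event consumer worker, not in the request path.
        publish_to_exchange("otter_events", payload)  # hypothetical helper
        return {"status": "accepted"}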

2. Kitchen Batch Tool Service (Port 8000)

What it does: Consumes structured orders from RabbitMQ, calculates batch items (grouping items by kitchen station and prep time), and streams real-time updates to kitchen displays via Server-Sent Events (SSE).

How it connects:

  • Consumes from RabbitMQ: Listens to the structured_order_data exchange for processed orders
  • Stores data: Saves batch items to PostgreSQL (batch_items, batch_item_sessions tables)
  • Streams to frontend: SSE endpoint (/orders/events) pushes real-time updates to connected clients
  • Reads reference data: Uses cached reference data from Snowflake (stations, items, brands) for batch calculations
  • Syncs data: Background worker syncs session data to Snowflake

Key Components:

  • RabbitMQ consumer that receives structured orders
  • Batch calculation logic that groups items by station and prep time
  • SSE endpoint for real-time frontend updates (sketched below)
  • REST API for batch item management
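
A minimal SSE sketch, again assuming FastAPI. The /orders/events path comes from this page; the broadcast queue and event shape are illustrative.

    import asyncio
    import json

    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse

    app = FastAPI()
    broadcast_queue: asyncio.Queue = asyncio.Queue()  # fed by the RabbitMQ consumer

    @app.get("/orders/events")
    async def order_events():
        async def event_stream():
            while True:
                event = await broadcast_queue.get()
                # SSE wire format: each event is "data: <payload>" plus a blank line.
                yield f"data: {json.dumps(event)}\n\n"
        return StreamingResponse(event_stream(), media_type="text/event-stream")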

3. Kitchen Prep Tool Service (Port 8002)

What it does: Manages kitchen inventory, prep work, and product tracking. Provides REST APIs for prep inventory, transfer units (TU), PAR levels, and product scanning.

How it connects:

  • Provides REST APIs: Direct HTTP endpoints for the frontend (prep inventory, TU, PAR management)
  • Publishes to RabbitMQ: Scan events → scans exchange
  • Stores data: All inventory data in PostgreSQL
  • Syncs data: Background workers sync PAR levels and generate PDF reports

Key Components:

  • REST APIs for inventory management
  • JWT-based authentication (sketched below)
  • Product scanning workflow
  • Background workers for data sync and PDF generation
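
A sketch of JWT verification as a request dependency, assuming FastAPI and PyJWT; the algorithm and error handling are assumptions. In production the signing secret would come from Azure Key Vault via shared/utils/secret_manager.py.

    import jwt  # PyJWT
    from fastapi import Depends, HTTPException
    from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

    bearer = HTTPBearer()
    SECRET_KEY = "replace-me"  # placeholder; real secrets come from Azure Key Vault

    def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
        # Decode and verify the bearer token; reject anything invalid or expired.
        try:
            return jwt.decode(creds.credentials, SECRET_KEY, algorithms=["HS256"])
        except jwt.PyJWTError:
            raise HTTPException(status_code=401, detail="Invalid or expired token")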

Complete Order Flow

Here's how an order moves through the system from Otter to the kitchen display:

sequenceDiagram
    participant O as Otter POS
    participant OMS as Order Management Service
    participant RMQ as RabbitMQ
    participant KBT as Kitchen Batch Tool Service
    participant FE as Frontend

    O->>OMS: Webhook: New order event
    OMS->>RMQ: Publish to otter_events exchange
    RMQ->>OMS: Deliver to otter_events queue
    OMS->>OMS: Process event (normalize data)
    OMS->>OMS: Save order to PostgreSQL
    OMS->>RMQ: Publish structured order to structured_order_data exchange
    RMQ->>KBT: Deliver to structured_order_data queue
    KBT->>KBT: Calculate batch items (group by station/prep time)
    KBT->>KBT: Save batch items to PostgreSQL
    KBT->>KBT: Add event to SSE broadcast queue
    KBT->>FE: Stream event via SSE (real-time update)

Step-by-step breakdown:

  1. Otter sends webhook → Order Management Service receives HTTP POST at /webhooks/otter
  2. Publish raw event → Service immediately publishes the webhook payload to RabbitMQ otter_events exchange (FANOUT)
  3. Process event → Otter event consumer worker picks up the message from the queue, normalizes the data, and saves it to PostgreSQL
  4. Publish structured order → After processing, the service publishes a structured order (using protobuf) to structured_order_data exchange
  5. Kitchen Batch Tool consumes → Kitchen Batch Tool Service receives the structured order from its queue
  6. Calculate batch items → Service groups items by kitchen station and prep time, using reference data from the Snowflake cache (see the sketch after this list)
  7. Store and stream → Batch items are saved to PostgreSQL, then pushed to all connected frontend clients via SSE
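
To make step 6 concrete, here is a hypothetical grouping function; the real batch calculation lives in the Kitchen Batch Tool Service, and the field names (station_id, prep_time_minutes) are assumptions.

    from collections import defaultdict

    def calculate_batch_items(order_items: list[dict]) -> dict[tuple, list[dict]]:
        """Group order items by (kitchen station, prep time) for batching."""
        batches: dict[tuple, list[dict]] = defaultdict(list)
        for item in order_items:
            # Station and prep time would be resolved from the Snowflake
            # reference-data cache before this point.
            key = (item["station_id"], item["prep_time_minutes"])
            batches[key].append(item)
        return dict(batches)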

Data Storage and Transformation

PostgreSQL stores operational data:

  • Order Management Service: otter.orders, otter.order_items, otter.order_item_modifiers, otter.order_promotions
  • Kitchen Batch Tool Service: batch_items, batch_item_sessions
  • Kitchen Prep Tool Service: Prep inventory, transfer units, PAR levels, master products, authentication data

Snowflake stores analytical data:

  • All services have background workers that sync operational data to Snowflake for analytics
  • Kitchen Batch Tool uses Snowflake reference data (stations, items, brands) cached in memory

Message Format: All RabbitMQ messages use Protocol Buffers for serialization. The structured order format is defined in shared/proto/structured_order_data.proto.
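
For illustration, a round trip of a protobuf message as a consumer might perform it. The generated module and message/field names (structured_order_data_pb2, StructuredOrder, order_id) are assumptions based on the .proto path above.

    from shared.proto import structured_order_data_pb2 as pb  # generated module (name assumed)

    order = pb.StructuredOrder()  # hypothetical message name
    order.order_id = "abc-123"

    payload = order.SerializeToString()        # bytes placed on the RabbitMQ message body
    decoded = pb.StructuredOrder.FromString(payload)
    assert decoded.order_id == "abc-123"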

For detailed order transformation logic, see Order Flow.

Code Organization

Monorepo Structure: All three services live in the same repository (orbital-manager-backend/), sharing common code through the shared/ directory.

Shared Components (shared/):

  • Database initialization (PostgreSQL, MongoDB, Snowflake)
  • RabbitMQ connection and message broker logic
  • Protocol Buffer definitions
  • Common utilities (logging, secret management, etc.)
  • Background service patterns

Service-Specific Code:

  • Each service has its own directory: order_management_service/, kitchen_batch_tool_service/, kitchen_prep_tool_service/
  • SQL queries are organized by database: sql/postgres_sql/ (operational), sql/snowflake_sql/ (analytical)
  • Kitchen Prep Tool uses the repository pattern; the other services use direct SQL modules
  • Workers live in app/workers/ within each service (never in shared/)

Path Resolution: Services manipulate sys.path in main.py so they can import shared/ modules; see order_management_service/app/main.py for an example.
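
The typical shape of that manipulation (illustrative; see the real version in order_management_service/app/main.py):

    import sys
    from pathlib import Path

    # Add the repository root so `shared/` is importable from any service.
    # parents[2] assumes this file lives at <service>/app/main.py.
    sys.path.insert(0, str(Path(__file__).resolve().parents[2]))

    from shared.utils.logger import get_logger  # now resolvable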

For detailed patterns, see Folder Structure Guide.

Background Workers

Background workers run scheduled or continuous tasks within each service. All workers are located in app/workers/ within their respective service (never in shared/).

Worker Patterns:

  • APScheduler: For scheduled/cron tasks (e.g., daily syncs), as sketched below
  • BackgroundService: For continuous background tasks (e.g., RabbitMQ consumers)
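
A minimal APScheduler sketch for a scheduled sync worker; the schedule and job body are illustrative, not the real worker implementation.

    from apscheduler.schedulers.blocking import BlockingScheduler

    scheduler = BlockingScheduler()

    @scheduler.scheduled_job("cron", hour=2)  # e.g. a daily 02:00 sync
    def sync_customer_data() -> None:
        # Read from PostgreSQL and write to Snowflake (omitted here).
        ...

    # Blocks when run standalone; inside a service an async/background
    # scheduler would run alongside the web app instead.
    scheduler.start()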

Workers by Service:

Order Management Service:

  • CustomerDataSyncWorker: Daily sync of customer data from PostgreSQL to Snowflake
  • PromotionsDataSyncWorker: Syncs promotion data to Snowflake
  • MissedOrderSyncWorker: Handles missed orders

Kitchen Batch Tool Service:

  • SessionDataSyncWorker: Syncs session data to Snowflake

Kitchen Prep Tool Service:

  • ParsSyncWorker: Weekly PAR level sync to Snowflake
  • PrepSlackPdfWorker: Generates prep inventory PDFs and sends them via Slack
  • TuSlackPdfWorker: Generates transfer unit inventory PDFs and sends them via Slack

Workers are started and stopped in each service's main.py using the lifespan context manager. See Best Practices for implementation guidelines.
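
A hedged sketch of that lifecycle wiring, assuming FastAPI's lifespan hook; the worker's start/stop interface is an assumption.

    from contextlib import asynccontextmanager

    from fastapi import FastAPI

    @asynccontextmanager
    async def lifespan(app: FastAPI):
        worker = SessionDataSyncWorker()  # hypothetical constructor
        await worker.start()              # assumed async start/stop interface
        yield                             # the application serves requests here
        await worker.stop()               # clean shutdown on termination

    app = FastAPI(lifespan=lifespan)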

System Properties

Resilience

  • Graceful degradation: Services start even if dependencies (RabbitMQ, databases) are temporarily unavailable, as sketched below
  • Auto-retry: RabbitMQ consumers automatically retry failed messages
  • Fallback caches: Reference data caches (e.g., ReferenceDataCache in Kitchen Batch Tool) provide in-memory fallback if Snowflake is unavailable
  • Clean shutdown: 5-second grace period for in-flight requests during shutdown
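
An illustrative graceful-degradation pattern for the broker connection (the hostname, retry interval, and pika client are assumptions):

    import logging
    import time

    import pika
    from pika.exceptions import AMQPConnectionError

    def connect_with_retry(retry_seconds: int = 5) -> pika.BlockingConnection:
        # Keep retrying instead of failing startup, so the service can still
        # serve HTTP traffic while the broker is down.
        while True:
            try:
                return pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
            except AMQPConnectionError:
                logging.warning("RabbitMQ unavailable, retrying in %ss", retry_seconds)
                time.sleep(retry_seconds)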

Scalability

  • Independent scaling: Each service can be scaled independently based on load
  • Multiple consumers: Instances of the same service share one queue bound to the FANOUT exchange, so RabbitMQ load-balances messages across them while every other service still receives its own copy
  • Caching: Redis cache for batch items, item mappings, recipes, and conversion factors
  • In-memory caches: Reference data caches reduce database load

Security

  • Secrets management: Azure Key Vault for production secrets (accessed via shared/utils/secret_manager.py)
  • Error tracking: Sentry integration for error monitoring and alerting
  • Authentication: JWT-based authentication in Kitchen Prep Tool Service

Monitoring

  • Health checks: All services provide /ping endpoints (sketched below)
  • Structured logging: All services use shared/utils/logger.get_logger() for consistent, structured logs
  • Error tracking: Sentry configured via SENTRY_DSN - see Sentry Dashboard
  • Issue tracking: Errors linked to Linear tickets
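
For instance, a minimal health endpoint using the shared logger, assuming a FastAPI app; get_logger's exact signature is assumed.

    from fastapi import FastAPI

    from shared.utils.logger import get_logger

    logger = get_logger(__name__)  # assumed call signature
    app = FastAPI()

    @app.get("/ping")
    async def ping() -> dict:
        logger.info("health check")
        return {"status": "ok"}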

For observability tools and usage, see Team Resources.

Next Steps