How the system ensures performance, security, and scalability in production, and how the architecture is described through the 4+1 architectural view model.
Concrete strategies for ensuring the three most critical quality attributes.
Card validation completes in under 300ms at the 99th percentile during peak commuter hours, ensuring boarding throughput of at least 30 passengers per minute per terminal.
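The latency target above can be checked mechanically from collected samples. The sketch below, with illustrative names (`percentile`, `meetsBoardingSlo`) that are not part of the actual codebase, computes a nearest-rank 99th percentile and compares it against the 300 ms budget:

```typescript
// Nearest-rank percentile: the smallest sample such that at least p% of
// samples are less than or equal to it.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// SLO check: p99 of boarding-validation latencies must stay under 300 ms.
function meetsBoardingSlo(latenciesMs: number[]): boolean {
  return percentile(latenciesMs, 99) < 300;
}

// Example: 99 fast validations plus one 450 ms outlier. The outlier falls in
// the top 1%, so the p99 is 120 ms and the SLO is still met.
const samples = [...Array.from({ length: 99 }, () => 120), 450];
console.log(percentile(samples, 99)); // → 120
console.log(meetsBoardingSlo(samples)); // → true
```

A check like this would typically run against a rolling window of production metrics rather than a static array; the nearest-rank method is one common convention, and monitoring systems may use interpolated percentiles instead.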
The system addresses the OWASP Top 10 risk categories. Sensitive financial data is isolated behind the Menged API boundary, which minimizes PCI-DSS scope to the card tokenization layer.
The system scales from 500 to 10,000 concurrent users by adding server instances without database schema changes or code modifications.
Multi-zone deployment topology for high availability.
Five complementary perspectives (the four structural views plus the unifying use case view) that together describe the complete architecture.
The logical view describes the system's functional decomposition. Abokabot is organized into five bounded domains: Identity (user registration, authentication, roles), Ticketing (ticket issuance, validation, QR generation), Routing (route registry, schedule management, stop data), Payments (fare calculation, Menged integration, transaction ledger), and Administration (user management, audit logs, reporting). Each domain is modeled as a package in the class diagram. Inter-domain communication occurs through well-defined service interfaces, ensuring loose coupling. The FareEngine crosses domain boundaries through a dependency-inverted interface so both the Ticketing and Payments domains can use it without circular imports.
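The dependency-inverted FareEngine boundary described above can be sketched as follows. The interface and type names (`FareCalculator`, `FareContext`, `DistanceBandFareEngine`) and the fare formula are illustrative assumptions, not the real implementation:

```typescript
// Input the fare calculation needs, owned by the shared abstraction, so
// neither Ticketing nor Payments imports the other's domain types.
interface FareContext {
  routeId: string;
  distanceKm: number;
  passengerType: "regular" | "student" | "senior";
}

// Both the Ticketing and Payments domains depend on this interface rather
// than on a concrete engine, which is what breaks the circular import.
interface FareCalculator {
  calculate(ctx: FareContext): number; // fare amount in the base currency unit
}

// One concrete engine, registered behind the interface at composition time.
// The distance bands and discount are placeholder policy, for illustration.
class DistanceBandFareEngine implements FareCalculator {
  calculate(ctx: FareContext): number {
    const base =
      ctx.distanceKm <= 5 ? 10 : 10 + Math.ceil(ctx.distanceKm - 5) * 2;
    const discount = ctx.passengerType === "regular" ? 1 : 0.5;
    return base * discount;
  }
}
```

The design choice here is that the interface lives with its consumers (the domains), while the concrete engine lives in its own module, so swapping fare policy never touches Ticketing or Payments code.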
The development view describes the codebase organization. The monorepo is divided into three top-level modules: core (shared types, utilities, configuration), services (auth-service, ticketing-service, fare-service, route-service, admin-service), and adapters (menged-adapter, postgres-adapter, redis-adapter). Each service is independently testable and deployable as a Docker container. Build pipelines run per-service tests in parallel, reducing CI time. The adapter layer abstracts all external dependencies, making each service's unit tests independent of real infrastructure. Trunk-based development is used with feature flags to avoid long-lived branches.
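The claim that the adapter layer makes unit tests independent of real infrastructure can be illustrated with a minimal port-and-adapter sketch. All names here (`TicketStore`, `InMemoryTicketStore`, `issueTicket`) are assumptions for illustration; in production the same port would be bound to the postgres-adapter:

```typescript
// Port: the narrow interface a service depends on. The service never
// imports the concrete postgres-adapter directly.
interface TicketStore {
  save(ticketId: string, payload: string): Promise<void>;
  find(ticketId: string): Promise<string | undefined>;
}

// In-memory fake used in unit tests; it satisfies the same contract as the
// real adapter, so service logic can be exercised without a database.
class InMemoryTicketStore implements TicketStore {
  private rows = new Map<string, string>();
  async save(id: string, payload: string): Promise<void> {
    this.rows.set(id, payload);
  }
  async find(id: string): Promise<string | undefined> {
    return this.rows.get(id);
  }
}

// Service code is written against the port, so the store is injectable.
async function issueTicket(store: TicketStore, id: string): Promise<string> {
  await store.save(id, JSON.stringify({ id, issuedAt: new Date().toISOString() }));
  return id;
}
```

In a per-service CI pipeline, tests instantiate the fake; only integration-stage tests bind the real adapters, which is what keeps the parallel unit-test runs fast.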
The process view captures runtime behavior and concurrency. The API Gateway runs as a single Node.js process that uses non-blocking I/O to handle thousands of concurrent connections. Each backend service runs as a separate OS process in its own container, providing fault isolation. The Fare Calculation Engine is CPU-intensive and therefore runs in a dedicated pool of four worker processes. Boarding event processing is asynchronous: the Ticketing Service publishes events to RabbitMQ, and consumer processes (AuditLogger, FraudDetector, DashboardNotifier) handle them independently, so a slow consumer never blocks the critical boarding validation path.
The physical view maps software components to hardware infrastructure. The production environment consists of two availability zones in the same region. Each zone hosts API Gateway replicas, service replicas, and a PostgreSQL node. Zone A holds the primary database; Zone B holds the hot standby. Redis runs as a cluster across both zones with automatic failover. Boarding terminals communicate with the nearest zone's load balancer. Outbound calls to the Menged integration are routed through a dedicated egress IP (whitelisted by Menged), with a secondary egress IP as backup. All servers run Ubuntu 22.04, with services packaged as Docker containers and orchestrated by Kubernetes, which manages container lifecycle and health checks.
The use case view ties all other views together through key usage scenarios. The primary scenario is "Passenger boards bus via Menged card tap." This scenario exercises the Logical view (Ticketing and Payment domains), the Development view (ticketing-service and menged-adapter modules), the Process view (async event publication after boarding), and the Physical view (terminal → Zone A gateway → Ticketing Service → Menged API → DB commit). A secondary scenario, "Admin updates fare table," exercises the Administration domain, PostgreSQL write path, and Redis cache invalidation. These scenarios validate that all architectural decisions compose correctly end-to-end.