Building for Scale: Our Microservices Architecture

Hitaji Technologies · December 20, 2025 · 8 min read

A look inside the technical architecture behind Hitaji 360 — why we chose microservices, how we handle multi-tenancy and data isolation, and the honest trade-offs of building a multi-product platform.

Why Microservices

When we started building Hitaji 360, we knew it needed to scale. Not just in terms of users, but in terms of products. We were not building a single application — we were building a platform that would need to support education, retail, legal, agricultural, hospitality, and nonprofit verticals, each with its own data models and business logic, all sharing common services like identity, messaging, and payments.

A monolithic architecture would have been simpler initially, but it would have become a bottleneck fast. According to Gartner’s application architecture research, organisations that adopt microservices report 50% faster deployment cycles and 30% fewer production incidents compared to monolithic systems, primarily because teams can deploy, test, and scale individual services independently.

For a multi-product platform like Hitaji 360, this independence is critical. A change to the accounting module should not risk breaking the messaging system. A spike in payment processing should not slow down the school management portal.

Our Stack

Here is what powers Hitaji 360 under the hood:

  • Node.js with NestJS — Our primary backend framework, chosen for its TypeScript support, modular architecture, and strong ecosystem. NestJS’s dependency injection and module system map naturally to microservice boundaries.
  • PostgreSQL — The backbone for structured, relational data: accounting entries, student records, case files, inventory items. PostgreSQL’s JSONB columns give us flexibility for feature-specific data without sacrificing relational integrity.
  • MongoDB — Used for document-heavy, schema-flexible data like chat messages, form submissions, and AI conversation history where the structure varies widely between records.
  • Redis — Caching layer for frequently-accessed data like user sessions, tenant configurations, and API rate limiting. Reduces database load and keeps response times low.
  • Socket.IO — Powers real-time features: live chat, instant notifications, and collaborative updates. When a teacher marks attendance, the head teacher’s dashboard updates in real time.
  • Docker — Every service is containerised, making deployments reproducible and environment-independent. We run the same containers in development and production.
  • DigitalOcean — Our cloud infrastructure, chosen for its straightforward pricing and strong presence in emerging markets.
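To make the Redis rate-limiting point above concrete, here is a minimal sketch of a fixed-window rate limiter. The class and field names are illustrative, not Hitaji 360's actual code, and an in-memory Map stands in for Redis (in production the counter would live in Redis, e.g. via INCR and EXPIRE, so limits are shared across service instances):

```typescript
// Illustrative fixed-window rate limiter. A Map stands in for Redis here so
// the sketch is self-contained; a shared Redis counter would replace it.
type WindowState = { count: number; windowStart: number };

class FixedWindowRateLimiter {
  private windows = new Map<string, WindowState>();

  constructor(
    private readonly limit: number,    // max requests per window
    private readonly windowMs: number, // window length in milliseconds
  ) {}

  // Returns true if the request is allowed, false if the caller is over limit.
  allow(key: string, now: number = Date.now()): boolean {
    const state = this.windows.get(key);
    if (!state || now - state.windowStart >= this.windowMs) {
      // New window: reset the counter for this key.
      this.windows.set(key, { count: 1, windowStart: now });
      return true;
    }
    if (state.count < this.limit) {
      state.count += 1;
      return true;
    }
    return false;
  }
}
```

Keying the limiter by tenant and route (e.g. `"${tenantId}:${path}"`) keeps one noisy tenant from exhausting another tenant's quota.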

Multi-Tenancy: One Platform, Isolated Data

One of the most important architectural decisions we made was how to handle multi-tenancy. Every organisation on Hitaji 360 gets its own isolated data environment, but shares the same application infrastructure.

We use a hybrid approach: shared application services with per-tenant database isolation for sensitive data. A centralised tenant management service handles configuration, database provisioning, and connection routing. When a new organisation signs up, their databases are provisioned automatically, credentials are encrypted and stored securely, and the application routes their requests to the correct data stores.
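The routing step can be sketched as a tenant registry that maps a tenant id to its database credentials. This is a simplified illustration — the interface and field names are assumptions, and in a real system the credentials would be decrypted from secure storage at resolution time:

```typescript
// Sketch of per-tenant connection routing: the registry maps a tenant id to
// the (decrypted) connection config for that tenant's isolated database.
interface TenantDbConfig {
  host: string;
  database: string;
  user: string;
  password: string; // in practice, decrypted from secure storage on demand
}

class TenantRegistry {
  private configs = new Map<string, TenantDbConfig>();

  // Called when a new organisation is provisioned.
  register(tenantId: string, config: TenantDbConfig): void {
    this.configs.set(tenantId, config);
  }

  // Resolve the data store for an incoming request's tenant.
  resolve(tenantId: string): TenantDbConfig {
    const config = this.configs.get(tenantId);
    if (!config) {
      throw new Error(`Unknown tenant: ${tenantId}`);
    }
    return config;
  }
}
```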

This approach gives us the cost efficiency of shared infrastructure with the security guarantees of data isolation — which is non-negotiable when you are handling student records, financial data, and medical information.

API Design and Service Communication

With 9+ deployed services, API design discipline is essential. We follow REST conventions with consistent pagination, filtering, and error response patterns across all services. Every API endpoint is authenticated via OAuth2/OIDC tokens, with scope-based access control that determines what each user can see and do.
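As an illustration of what a consistent pagination envelope can look like — the field names here are an assumption, not the platform's actual contract:

```typescript
// Illustrative paginated response envelope, shared by every list endpoint.
interface Page<T> {
  data: T[];
  meta: {
    page: number;      // 1-based page index
    perPage: number;
    total: number;     // total matching records
    totalPages: number;
  };
}

// Build one page of results from an already-filtered result set.
function paginate<T>(items: T[], page: number, perPage: number): Page<T> {
  const total = items.length;
  const start = (page - 1) * perPage;
  return {
    data: items.slice(start, start + perPage),
    meta: { page, perPage, total, totalPages: Math.ceil(total / perPage) },
  };
}
```

When every service returns the same envelope, client code and error handling can be shared instead of rewritten per service.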

Inter-service communication uses a combination of synchronous REST calls (for operations that need immediate responses) and event-driven patterns (for operations that can be processed asynchronously, like sending notifications after a payment is confirmed).
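The asynchronous side can be sketched with a minimal in-process event bus — a payment service publishes an event and a notification handler consumes it. In a real deployment a message broker sits between the two; this self-contained sketch (with illustrative names) only shows the shape of the pattern:

```typescript
// Minimal event bus: publishers emit named events, subscribers handle them
// asynchronously. A broker would replace this in-process Map in production.
type Handler<T> = (payload: T) => Promise<void>;

class EventBus {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<T>(event: string, handler: Handler<T>): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(event, list);
  }

  // With a real broker, publish returns as soon as the message is enqueued;
  // here we simply invoke all registered handlers.
  async publish<T>(event: string, payload: T): Promise<void> {
    const list = this.handlers.get(event) ?? [];
    await Promise.all(list.map((h) => h(payload)));
  }
}
```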

Lessons Learned

Microservices are not free. Here are the honest trade-offs we have navigated:

Operational complexity is real. Monitoring 9+ services requires proper observability. We use structured logging, health check endpoints, and container orchestration to keep everything running. A single service failure should degrade gracefully, not bring down the platform.
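A health check endpoint that supports graceful degradation might aggregate per-dependency probes like this — a sketch with assumed names, not our exact implementation:

```typescript
// Sketch of a health report: each dependency gets its own probe, and one
// failing dependency marks the service "degraded" rather than hard-down.
type DependencyCheck = () => boolean;

interface HealthReport {
  status: "ok" | "degraded";
  checks: Record<string, "up" | "down">;
}

function healthReport(checks: Record<string, DependencyCheck>): HealthReport {
  const results: Record<string, "up" | "down"> = {};
  let allUp = true;
  for (const [name, check] of Object.entries(checks)) {
    let up = false;
    try { up = check(); } catch { up = false; } // a throwing probe counts as down
    results[name] = up ? "up" : "down";
    if (!up) allUp = false;
  }
  // "degraded" lets the orchestrator keep routing traffic while alerting,
  // instead of taking the whole service out over one dependency.
  return { status: allUp ? "ok" : "degraded", checks: results };
}
```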

Data consistency requires discipline. When data spans multiple services, maintaining consistency requires careful design. We use database transactions within services and eventual consistency patterns between services, with idempotency keys to prevent duplicate operations.
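The idempotency-key pattern can be sketched as follows: if the same key arrives twice (say, a retried payment webhook), the stored result is returned instead of re-running the side effect. A real system would persist keys durably with a TTL; the in-memory Map here just keeps the sketch self-contained:

```typescript
// Sketch of idempotency-key handling for cross-service operations.
class IdempotentExecutor {
  private results = new Map<string, unknown>();

  async run<T>(key: string, operation: () => Promise<T>): Promise<T> {
    if (this.results.has(key)) {
      // Duplicate delivery: return the cached result, skip the side effect.
      return this.results.get(key) as T;
    }
    const result = await operation();
    this.results.set(key, result);
    return result;
  }
}
```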

Developer experience matters. Each service needs to be independently runnable for development. We maintain Docker Compose configurations that let engineers spin up the services they need without running the entire platform locally.

Deployment pipelines are critical. With multiple services, manual deployment is not an option. We use containerised builds with automated health checks. If a deployment fails the health check, it rolls back automatically.

For a platform like Hitaji 360 that serves multiple products and tenants across different industries, the trade-offs are worth it. The architecture gives us the flexibility to add new products, scale individual components, and deploy changes independently — which is exactly what a growing platform needs.

Want to learn more about the technology behind Hitaji 360? Get in touch with our engineering team, or explore the products built on this architecture.

Written by Hitaji Technologies
