When I started migrating a monolithic system into a microservices architecture, the mission was simple: make the system easier to scale, easier to deploy, and easier to extend—without breaking the entire application every time a feature changed.
This post explains how I designed the architecture, structured each service, added communication between them, and deployed everything in containers.
Why I Chose Microservices
The old system had several limitations:
- A single codebase doing everything
- One broken module could break the whole system
- No way to scale individual features
- Deployments were slow and risky
- Developers constantly ran into merge conflicts in the same repo
Splitting the system into domain-driven microservices addressed these issues directly.
High-Level Architecture Overview
Each service handles one specific function:
- Auth Service — users, sessions, tokens
- Data Service — SQL Server operations
- Search Service — indexing and fast queries
- File Service — file uploads/processing
- Gateway — routing, authentication
- Frontend — React / Astro
┌──────────────────────┐
│       Frontend       │
└──────────┬───────────┘
           │
           ▼
┌──────────────────────┐
│     API Gateway      │
└──┬─────┬─────┬─────┬─┘
   │     │     │     │
   ▼     ▼     ▼     ▼
  Auth  Data Search File
The gateway receives all client requests, checks authentication, and forwards requests to the correct microservice.
Microservice Example: Python (Auth)
Here’s the structure I use for Python services:
service-auth/
│
├── app/
│   ├── main.py
│   ├── routes/
│   │   ├── login.py
│   │   └── register.py
│   ├── models/
│   │   └── user.py
│   ├── core/
│   │   ├── database.py
│   │   └── security.py
│   └── utils/
│       └── token.py
│
├── requirements.txt
├── Dockerfile
└── config.yaml
A simple FastAPI endpoint:
from fastapi import FastAPI, HTTPException
from app.utils.token import create_token

app = FastAPI()

@app.post("/login")
def login(username: str, password: str):
    # Demo check only; replace with real credential verification
    if username != "admin":
        raise HTTPException(status_code=401, detail="Invalid credentials")
    return {"token": create_token(username)}
Dockerizing the Microservice
Each service runs in its own container using Docker:
FROM python:3.11
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Then orchestrated using Docker Compose:
version: "3.8"
services:
  gateway:
    build: ./gateway
    ports:
      - "3000:3000"
  auth:
    build: ./service-auth
    environment:
      - DB_HOST=db
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      SA_PASSWORD: "StrongPassword1!"
      ACCEPT_EULA: "Y"
Start everything:
docker compose up -d
Data Layer with SQL Server (Node.js)
The data service exposes SQL Server through a clean, typed API using Node.js:
import sql from "mssql";

export async function getUser(id) {
  const pool = await sql.connect(process.env.SQL_CONNECTION);
  const result = await pool
    .request()
    .input("id", sql.Int, id)
    .query("SELECT * FROM Users WHERE id = @id");
  return result.recordset[0];
}
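To make this reachable from the gateway, the data service still needs a thin HTTP layer in front of functions like getUser. A minimal sketch, assuming an Express app and a hypothetical /users/:id route (the route path and the ./users.js module name are illustrative, not from the original code):

import express from "express";
import { getUser } from "./users.js"; // hypothetical module holding getUser

const app = express();

// Thin HTTP wrapper around the data-access function
app.get("/users/:id", async (req, res) => {
  try {
    const user = await getUser(Number(req.params.id));
    if (!user) return res.status(404).json({ error: "Not found" });
    res.json(user);
  } catch (err) {
    res.status(500).json({ error: "Database error" });
  }
});

// Port 8000 matches what the gateway proxies to for this service
app.listen(8000);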
Service‑to‑Service Communication (Queues)
For async tasks (email, file processing, indexing), I used a lightweight Redis-backed job queue (Bull):
import Queue from "bull";

const emailQueue = new Queue("email");

emailQueue.process(async (job) => {
  console.log("Sending email to", job.data.to);
});
Triggering a task from another service:
emailQueue.add({ to: "test@mail.com" });
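One detail worth calling out: Bull queues live in Redis, and the Compose file above doesn't include a Redis service yet. A minimal sketch of pointing both the producer and the consumer at a shared instance, assuming a redis service is added to Compose (the hostname and the REDIS_HOST variable are assumptions, not part of the original setup):

import Queue from "bull";

// Both services must connect to the same Redis instance;
// "redis" is the assumed hostname of a Redis container in docker-compose.
const emailQueue = new Queue("email", {
  redis: { host: process.env.REDIS_HOST || "redis", port: 6379 },
});

// Any service holding this handle can enqueue work for the consumer
emailQueue.add({ to: "test@mail.com" });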
API Gateway Logic
Using Express with express-http-proxy:
import express from "express";
import proxy from "express-http-proxy";
const app = express();
app.use("/auth", proxy("http://auth:8000"));
app.use("/data", proxy("http://data:8000"));
app.use("/search", proxy("http://search:8000"));
app.listen(3000);
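The routing above doesn't yet perform the authentication check the gateway is supposed to do. One way to add it, as a minimal sketch: assume the auth service exposes a /verify endpoint that validates the bearer token (that endpoint is an assumption, it isn't part of the auth code shown earlier), then register the protected routes with a small middleware in front of the proxy, replacing the plain registrations above:

// Hypothetical auth check; assumes the auth service has a /verify endpoint
// and that the gateway runs on Node 18+ (global fetch).
async function requireAuth(req, res, next) {
  const token = req.headers.authorization;
  if (!token) return res.status(401).json({ error: "Missing token" });
  try {
    const check = await fetch("http://auth:8000/verify", {
      headers: { Authorization: token },
    });
    if (!check.ok) return res.status(401).json({ error: "Invalid token" });
    next();
  } catch (err) {
    res.status(502).json({ error: "Auth service unavailable" });
  }
}

// Protected routes get the auth check before the proxy
app.use("/data", requireAuth, proxy("http://data:8000"));
app.use("/search", requireAuth, proxy("http://search:8000"));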
This keeps the frontend extremely simple — only one API URL.
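For example, fetching a user from the frontend only ever touches the gateway. The /data/users/1 path assumes the hypothetical data-service route sketched earlier, and http://localhost:3000 assumes the local Compose setup:

// Token as returned by POST /auth/login (placeholder value here)
const token = "...";

// Every request goes to the single gateway URL; the gateway handles
// the auth check and routing to the right service.
const res = await fetch("http://localhost:3000/data/users/1", {
  headers: { Authorization: `Bearer ${token}` },
});
const user = await res.json();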
Deployment Overview
The deployed setup ran on Azure:
- Azure Web App for Containers
- Azure SQL
- Azure Storage
- GitHub Actions CI/CD
- Auto-scaling triggers (CPU, latency, queue length)
Everything stays isolated but works together under the gateway.
What I Learned
- Smaller services encourage better designs
- Logging becomes critical (see the small gateway logging sketch after this list)
- Monitoring is mandatory
- Deployments get faster and safer
- Scaling becomes much easier
- Adding new features doesn’t slow down old ones
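On the logging point: even a one-line request log at the gateway makes it far easier to trace a request across services. A small sketch, plugged into the gateway's Express app from earlier (the log format is just one possible choice):

// Minimal per-request logging for the gateway;
// register this before the proxy routes so every request is captured.
app.use((req, res, next) => {
  const start = Date.now();
  res.on("finish", () => {
    console.log(`${req.method} ${req.originalUrl} ${res.statusCode} ${Date.now() - start}ms`);
  });
  next();
});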
Final Thoughts
Microservices are not the solution for everything — but for systems that need scaling, isolation, and safe deployments, they are incredibly effective. This architecture drastically improved reliability and developer productivity while allowing the system to grow without fear of breaking everything.