Posts

Concurrency Control from First Principles

3 strategies on a single node, 3 strategies across multiple nodes — with real-life examples, production patterns, and future-ready guidance

Credits / Acknowledgements: This blog is based on detailed discussions and whiteboarding sessions with Sourabh Kumar Banka and Jatin Goyal.

Why you should care (even if things “work fine” today)

Race conditions don’t usually show up in development. They show up when traffic spikes, retries kick in, background jobs overlap, autoscaling adds more instances, or latency increases (so overlaps happen more often). If you’re building modern systems (cloud, microservices, async workflows, distributed caches), you’re going to face concurrency whether you like it or not.

The core idea to remember: every correct system serializes updates somewhere. Your design decision is where that serialization happens and what trade-offs you accept.

First principles: What is a race condition?

A race condition exists when all three are true:

1. Shared mutable state: Something...
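The core idea above, that every correct system serializes updates somewhere, can be sketched in a few lines of Java. This is a minimal illustration of serializing at the application level inside one JVM; the class and method names are illustrative, not from the post:

```java
// Minimal sketch: serializing updates inside one JVM with a lock.
// The lock is the "somewhere" at which concurrent updates become ordered.
public class SerializedCounter {
    private long value = 0;
    private final Object lock = new Object();

    // The whole read -> compute -> write happens under the lock,
    // so concurrent callers are forced into one single order.
    public void increment() {
        synchronized (lock) {
            value = value + 1;
        }
    }

    public long get() {
        synchronized (lock) {
            return value;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SerializedCounter counter = new SerializedCounter();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(counter.get()); // 40000 with the lock; often less without it
    }
}
```

The trade-off the post refers to is visible even here: the lock guarantees correctness, but every thread now queues at one point.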

Concurrency Control from First Principles

3 Strategies on a Single Node, 3 Across Multiple Nodes — with Real-Life Analogies and Future-Ready Patterns

Credits / Acknowledgements: This article is based on deep technical discussions and whiteboarding sessions with Sourabh Kumar Banka and Jatin Goyal.

Why This Matters (Now and in the Future)

Most real production failures are not caused by wrong business logic. They are caused by incorrect ordering of updates. As systems scale — microservices, distributed caches, cloud-native deployments, async retries, autoscaling — concurrency issues increase, not decrease.

If you remember only one thing from this article: concurrency control is about deciding where updates become ordered (serialized), and intentionally paying the right trade-off. Every correct system enforces order somewhere: in the database, the application, a distributed coordinator, an event log, or a workflow engine. If you don’t choose where, contention will choose for you.

First Principles: Why...
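One common way the application itself can be the place where updates become ordered is an optimistic compare-and-swap retry loop. A minimal sketch, assuming an in-memory balance; the class and method names are my own, not from the article:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: enforcing order in the application with compare-and-swap.
// The CAS on one atomic variable is the single point where updates
// become ordered; losers simply re-read and retry.
public class CasAccount {
    private final AtomicLong balanceCents = new AtomicLong(0);

    public long deposit(long amountCents) {
        while (true) {
            long current = balanceCents.get();          // read
            long next = current + amountCents;          // compute
            if (balanceCents.compareAndSet(current, next)) {
                return next;                            // our write won
            }
            // Another actor updated first; loop and try again.
        }
    }

    public long balance() {
        return balanceCents.get();
    }

    public static void main(String[] args) {
        CasAccount acct = new CasAccount();
        acct.deposit(1_000);
        acct.deposit(250);
        System.out.println(acct.balance()); // 1250
    }
}
```

The trade-off here is retries under contention instead of blocking, which is exactly the kind of choice the article says you should make intentionally.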

Concurrency Control from First Principles

3 ways on a single node, 3 ways across multiple nodes — with real-life examples

Race conditions aren’t a “database problem” or a “threading problem”. They’re a physics-of-computing problem: two actors try to change the same thing, and time doesn’t give you a single, obvious order. This blog explains concurrency from first principles, then maps that to the 3 most common strategies on a single node and the 3 most common strategies in a distributed (multi-node) system, with practical examples you can reuse.

First principles: what causes a race condition?

A race condition exists when all three are true:

1. Shared mutable state: Something can be changed (a DB row, cache entry, file, in-memory map).
2. Concurrent actors: Two or more threads/processes/nodes can touch it “at the same time”.
3. Non-atomic read → compute → write: The update is not a single indivisible step.

The classic shape is: read current state, compute new state, write new state. If two actors do this concurrently, you can violate invariant...
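The read → compute → write hazard above can be shown deterministically by interleaving the steps by hand, with no threads involved. A small illustrative sketch (names are mine):

```java
// Deterministic illustration of the read -> compute -> write hazard:
// both "actors" read the same balance before either one writes.
public class LostUpdateDemo {
    static int balance = 100;

    public static void main(String[] args) {
        int readByA = balance;   // actor A reads 100
        int readByB = balance;   // actor B also reads 100, before A writes

        balance = readByA + 10;  // A writes 110
        balance = readByB + 10;  // B overwrites with 110; A's update is lost

        System.out.println(balance); // 110, not the intended 120
    }
}
```

With real threads the same interleaving happens nondeterministically, which is why the bug hides in development and surfaces under load.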

How Binary Exponentiation Helps Us Find Prime Numbers

From “Is this number prime?” to “We can test this fast — even for huge numbers”

At first glance, prime numbers feel like a school topic. But then you step into cryptography, security systems, distributed systems, blockchain, and authentication protocols — and suddenly primes aren’t academic anymore; they are foundational to modern computing. The real question becomes: how do we check whether a number is prime when the number itself is enormous? That’s where binary exponentiation quietly becomes one of the most important tools you’ll ever learn.

Scene 1: The Naive Prime Check (and why it breaks)

Let’s start simple. To check if a number n is prime, the basic idea is: try dividing n by numbers from 2 to n-1. This works… until it doesn’t.

Why this fails in real systems: if n has 100 digits, you cannot try dividing it by every candidate; even checking divisibility only up to √n is infeasible; and cryptographic primes are hundreds or thousands of bits long. At this point, brute force is dead. So engineers asked a smarter qu...
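One direction that smarter question leads is a Fermat-style probable-prime check, shown here as an illustration (the excerpt is cut off before naming the exact test the full post uses). It works because `BigInteger.modPow` uses fast binary exponentiation, so even huge exponents are cheap; the method name is mine:

```java
import java.math.BigInteger;

// Illustrative Fermat-style probable-prime check.
// Fermat's little theorem: if n is prime and gcd(base, n) = 1,
// then base^(n-1) ≡ 1 (mod n). A failing check proves n composite.
public class FermatCheck {
    // false => n is definitely composite (or < 2)
    // true  => n is prime, or a rare "Fermat liar" for this base
    static boolean probablyPrime(BigInteger n, BigInteger base) {
        if (n.compareTo(BigInteger.TWO) < 0) return false;
        if (n.equals(BigInteger.TWO)) return true;
        // modPow runs in O(log n) multiplications via binary exponentiation
        return base.modPow(n.subtract(BigInteger.ONE), n).equals(BigInteger.ONE);
    }

    public static void main(String[] args) {
        System.out.println(probablyPrime(BigInteger.valueOf(97), BigInteger.TWO)); // true
        System.out.println(probablyPrime(BigInteger.valueOf(91), BigInteger.TWO)); // false (91 = 7 * 13)
    }
}
```

Production primality tests (e.g. Miller–Rabin) refine this idea, but the engine underneath is the same fast modular exponentiation.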

Binary Exponentiation & Modular Arithmetic

From “Why is this confusing?” to “This is obvious now”

Some concepts in computer science don’t feel hard because they are complex — they feel hard because we try to compute instead of understanding. Binary exponentiation is one such concept. This blog captures a real learning journey: starting with confusion around large exponents and modular arithmetic, and ending with a clear, reusable mental model you’ll never forget.

1. The Problem That Started It All

We were asked to compute 2¹⁰¹ mod 7. At first glance, it feels intimidating: 101 is a large exponent, direct multiplication is impractical, and modular arithmetic feels tricky. But the key realization is this: the problem is not “big numbers” — the problem is “wrong approach”.

2. Why Naive Exponentiation Fails

The naive way is 2 × 2 × 2 × … (101 times). Problems: it takes O(n) multiplications, the intermediate numbers grow extremely large, and it is completely impractical in real systems. In cryptography, competitive programming, and system algorithms, this approach is unusable. So th...
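The standard square-and-multiply fix can be sketched in a few lines, applied to the 2¹⁰¹ mod 7 example above (the function name is mine):

```java
// Fast modular exponentiation (square-and-multiply):
// O(log exp) multiplications instead of O(exp).
public class FastPow {
    static long modPow(long base, long exp, long m) {
        long result = 1;
        base %= m;
        while (exp > 0) {
            if ((exp & 1) == 1) {           // current bit of exp is set:
                result = (result * base) % m; // multiply this power in
            }
            base = (base * base) % m;       // square for the next bit
            exp >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        // 2^3 ≡ 1 (mod 7) and 101 = 3*33 + 2, so the answer is 2^2 = 4
        System.out.println(modPow(2, 101, 7)); // 4
    }
}
```

Reducing mod m after every multiplication keeps intermediate values small, which is exactly what makes the "big numbers" part of the problem disappear.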

🚀 A Practical Coursera Roadmap for Java, CS, Concurrency, DevOps & Data Science

Learn by examples, not memorization — for long-term engineering growth

Most engineers search for “best course for Java / Kafka / DevOps”. The real question should be: which courses build thinking that still helps after 5–10 years? This blog is a discussion-driven roadmap, not a marketing list. Every recommendation below is chosen because it builds mental models, explains why systems behave the way they do, helps in service-based and product-based interviews, and pays off in real production systems.

🧠 First: How to Think About Learning (Important)

Learning tech stacks without fundamentals is like buying power tools without understanding wood 🪚, or driving fast without knowing brakes 🛑. So the roadmap flows like this:

1. Computer Science & Concurrency → how machines behave
2. Java (Modern) → how code executes
3. Spring, WebFlux, Kafka → how services communicate
4. DevOps & Cloud → how software reaches users
5. Data Science → how systems learn from data

1️⃣ Computer Science & Concurrency (The ...

Tomcat vs Jetty vs GlassFish vs Quarkus — A Deep, Story-Driven Guide (with Eureka)

A blog that makes the choice crystal clear for both freshers and senior engineers

The story: SkyHospital’s Java journey (and why “server choice” is never random)

Meet SkyHospital — a product company building hospital software: an admin portal for hospital staff, a doctor dashboard, patient appointment booking, billing & reports, notifications (email/SMS/WhatsApp), and eventually multiple microservices. They start small, then grow, and at each stage their “best server choice” changes. This is exactly how it happens in real life.

Chapter 1 — The first release (Tomcat enters)

SkyHospital’s first app is a single codebase: JSP pages for the admin UI, Spring MVC controllers, a couple of servlets and filters, and a simple SQLite database (for the MVP), packaged as a WAR file.

Deployment reality: the company has 1 VM, 1 admin who restarts services, a release once a week, and minimal monitoring. The CTO says: “I want something stable that every Java engineer understands.”

✅ Why they choose Tomcat

Tomcat is a servlet contai...