Posts

Real‑Time Stock Trading Dashboard Updates

How to update hundreds of UI components instantly when a new Buy Order is created

Trading UIs feel “alive” because they react to backend events in milliseconds: order book rows move, depth changes, charts tick, alerts fire, and positions recalc, often all at the same time. In interviews, this question tests whether you can design a system that’s:

- Low latency (real-time UX)
- Scalable (many users, many symbols, many updates/sec)
- Efficient (don’t re-render everything)
- Correct (ordering, duplicates, reconnection)
- Secure (authz, least privilege, data boundaries)

This blog explains a production-grade approach you can reuse long-term.

1) The core challenge

Event: Backend receives/creates a new Buy Order
Requirement: “Update 100s of UI components instantly”

Hidden complexity:

- Not every component cares about every order
- Updates can be bursty (market open, news spikes)
- UIs must survive disconnects and still be correct
- You can’t blast full snapshots repeatedly without melting bandwidth and ...
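To make the “not every component cares about every order” point concrete, here is a minimal in-memory Java sketch of per-symbol fan-out. The OrderEvent record and SymbolEventBus class are illustrative names, not code from the post, and a real deployment would sit behind a WebSocket/STOMP or similar transport layer.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.function.Consumer;

// Hypothetical event type: a new buy order for one symbol.
record OrderEvent(String symbol, String side, long quantity, double price) {}

// Fan out each event only to subscribers of that symbol,
// instead of broadcasting every order to every component.
class SymbolEventBus {
    private final Map<String, Set<Consumer<OrderEvent>>> subscribers = new ConcurrentHashMap<>();

    // A UI component/session registers interest in one symbol (e.g. "AAPL").
    public void subscribe(String symbol, Consumer<OrderEvent> listener) {
        subscribers.computeIfAbsent(symbol, s -> new CopyOnWriteArraySet<>()).add(listener);
    }

    // Called when the backend creates a new buy order.
    public void publish(OrderEvent event) {
        subscribers.getOrDefault(event.symbol(), Set.of())
                   .forEach(listener -> listener.accept(event));
    }
}
```

With this shape, a dashboard that only watches AAPL never sees a burst of TSLA orders, which is the key to not re-rendering everything.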

DB Isolation Levels (ACID) Explained — From Dirty Reads to Serializable (with Spring Boot examples)

Picture this: you and your friend are trying to book the last movie ticket on a Saturday night. You both open the app, see “1 seat left”, and smash the “Book Now” button at the same time.

- In one universe: only one of you gets it (correct).
- In another universe: both of you get “Booking confirmed” (😬 oversold seat).
- In the worst universe: the app charges both of you and then refunds one later (support ticket nightmare).

That “multiple people doing things at the same time” problem is what transaction isolation is trying to make boring, predictable, and safe.

ACID in 2 minutes (but we’ll zoom into “I”)

ACID is a set of guarantees databases try to provide for transactions:

- A — Atomicity: All or nothing (either the booking is done, or it’s as if it never happened).
- C — Consistency: Constraints/invariants hold (no negative inventory, no duplicate unique keys, etc).
- I — Isolation: Concurrent transactions shouldn’t step on each other in surprising ways.
- D — Durability: Once committed, i...
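As a hedged illustration of how the “I” in ACID shows up in application code, here is a minimal Spring Boot-style sketch of the ticket example. BookingService, SeatRepository, and its methods are hypothetical stand-ins, not the post’s own example; only the Spring annotations are real API.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical data-access abstraction for the example.
interface SeatRepository {
    int countAvailable(long showId);
    void reserveOne(long showId, long userId);
}

@Service
public class BookingService {

    private final SeatRepository seats;

    public BookingService(SeatRepository seats) {
        this.seats = seats;
    }

    // SERIALIZABLE makes the two concurrent "last seat" bookings behave as if
    // they ran one after the other: one commits, the other fails and can retry.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void bookLastSeat(long showId, long userId) {
        int available = seats.countAvailable(showId);   // read
        if (available < 1) {
            throw new IllegalStateException("Sold out");
        }
        seats.reserveOne(showId, userId);                // write
    }
}
```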

PACELC, Paxos, and Raft: How “Consistency vs Latency” Shows Up in Real Systems

Distributed systems design is mostly about choosing which pain you’re willing to live with, because you can’t eliminate it. The PACELC theorem is a practical lens for those choices, and Paxos and Raft are two of the most important tools engineers use when they decide “we’re going to pay latency to buy correctness.”

This post ties them together:

- PACELC tells you what trade-off you’re making
- Paxos/Raft are two ways to implement the “consistent” side of that trade-off
- You’ll see concrete examples, message flows, and how partitions change behavior

Why CAP isn’t enough, and why PACELC exists

You likely know the CAP-style story: a Partition happens → you must choose Consistency or Availability. The missing piece is: most of the time you’re not partitioned; you’re just dealing with latency, replication delay, tail latencies, and node slowness. PACELC adds the everyday reality:

- If there is a Partition (P): you choose Availability (A) or Consistency (C)
- Else (E) (normal operation, n...
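A minimal sketch of the PACELC “else” branch, assuming a hypothetical Replica interface rather than any real client API: the same write can be acknowledged only after a majority of replicas (paying latency for consistency) or after a single replica with asynchronous catch-up (lower latency, weaker consistency).

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical replica client used only for illustration.
interface Replica {
    boolean ack(String key, String value);
}

class ReplicatedWriter {
    private final List<Replica> replicas;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    ReplicatedWriter(List<Replica> replicas) { this.replicas = replicas; }

    // Consistency-leaning write: block until a majority has acknowledged.
    boolean writeConsistent(String key, String value) throws InterruptedException {
        int majority = replicas.size() / 2 + 1;
        CountDownLatch acks = new CountDownLatch(majority);
        for (Replica r : replicas) {
            pool.submit(() -> { if (r.ack(key, value)) acks.countDown(); });
        }
        return acks.await(500, TimeUnit.MILLISECONDS); // pay latency for correctness
    }

    // Latency-leaning write: acknowledge after one replica, replicate the rest asynchronously.
    boolean writeLowLatency(String key, String value) {
        boolean ok = replicas.get(0).ack(key, value);
        for (int i = 1; i < replicas.size(); i++) {
            Replica r = replicas.get(i);
            pool.submit(() -> r.ack(key, value)); // best-effort, may lag behind
        }
        return ok;
    }
}
```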

From PACELC to Byzantine Consensus

Why Raft/Paxos stop being enough, and what PBFT/HotStuff/Tendermint do differently

You wrote “BGP” as the next, harder consensus problem. In distributed-systems literature, that usually means the Byzantine Generals Problem (not the Internet routing protocol also called BGP). The Byzantine Generals Problem is the classic way to talk about Byzantine faults: nodes that can lie, equivocate, or behave maliciously.

This post connects the dots:

- PACELC: the trade-offs you’re always making (partition vs. else; latency vs. consistency)
- Crash-fault consensus (Paxos/Raft): great when nodes fail benignly
- Byzantine consensus (PBFT, HotStuff, Tendermint): needed when nodes can be arbitrary/malicious
- Concrete, easy-to-follow examples and “how the algorithm actually moves messages”

1) PACELC sets the stage: consistency is never “free”

PACELC (Abadi) is basically:

- If there’s a Partition (P) → choose Availability (A) or Consistency (C)
- Else (E) (no partition) → you still trade Latency (L) ...
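To make the crash-fault vs. Byzantine difference tangible, here is a small sketch of the standard quorum arithmetic: n = 2f+1 nodes with majority quorums for Raft/Paxos, and n = 3f+1 nodes with 2f+1 quorums for PBFT/HotStuff/Tendermint. The class and method names are illustrative only.

```java
public class QuorumMath {

    // Crash faults: a majority quorum of f+1 out of n = 2f+1 nodes is enough.
    static int crashFaultQuorum(int n) {
        return n / 2 + 1;
    }

    // Byzantine faults: you need 2f+1 matching votes out of n = 3f+1 nodes,
    // so any two quorums overlap in at least f+1 nodes, i.e. at least one honest node.
    static int byzantineQuorum(int n) {
        int f = (n - 1) / 3;      // max Byzantine nodes tolerated by n replicas
        return 2 * f + 1;
    }

    public static void main(String[] args) {
        System.out.println("5 nodes, crash faults -> quorum " + crashFaultQuorum(5)); // 3
        System.out.println("4 nodes, Byzantine    -> quorum " + byzantineQuorum(4));  // 3
        System.out.println("7 nodes, Byzantine    -> quorum " + byzantineQuorum(7));  // 5
    }
}
```

This is why tolerating the same number of faulty nodes costs noticeably more replicas (and more message rounds) once those nodes can lie rather than just crash.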

CAP Theorem, Explained: Why Distributed Systems Can’t Have It All

If you’ve ever built (or debugged) a distributed system, you’ve felt it: the moment when the network misbehaves, nodes stop talking, and your system has to decide what “correct” means under failure. That trade-off is exactly what the CAP theorem (also called Brewer’s theorem) puts into words:

In the presence of a network partition, a distributed system must choose between consistency and availability.

People often summarize it as “you can’t have Consistency, Availability, and Partition tolerance all at once.” That slogan is useful, but the real value of CAP comes from understanding what each term means, and what the trade-off looks like in real production systems.

The Three Letters: C, A, and P

CAP is about a distributed system (multiple nodes) where messages travel over a network that can fail.

Consistency (C): All clients see the same data at the same time. More precisely (in the CAP discussion), “consistency” usually means something close to linearizability: once a write compl...
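As a rough sketch (not the post’s code) of what the C-vs-A choice looks like on a single read path, assuming a hypothetical QuorumClient interface: a CP system refuses to answer when it cannot reach a quorum, while an AP system serves a possibly stale local value instead.

```java
import java.util.Optional;

class ReadHandler {

    // Hypothetical interface standing in for a replicated store's client.
    interface QuorumClient {
        boolean quorumReachable();
        String readQuorum(String key);   // linearizable read through a quorum
        String readLocal(String key);    // possibly stale local copy
    }

    private final QuorumClient client;

    ReadHandler(QuorumClient client) { this.client = client; }

    // CP choice: refuse to answer rather than risk returning stale data.
    Optional<String> readConsistent(String key) {
        if (!client.quorumReachable()) {
            return Optional.empty();     // unavailable for the duration of the partition
        }
        return Optional.of(client.readQuorum(key));
    }

    // AP choice: always answer, accepting that the value may be stale.
    String readAvailable(String key) {
        return client.quorumReachable() ? client.readQuorum(key) : client.readLocal(key);
    }
}
```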