Real‑Time Stock Trading Dashboard Updates

How to update hundreds of UI components instantly when a new Buy Order is created

Trading UIs feel “alive” because they react to backend events in milliseconds: order book rows move, depth changes, charts tick, alerts fire, and positions recalc—often at the same time.

In interviews, this question tests whether you can design a system that’s:

  • Low latency (real-time UX)

  • Scalable (many users, many symbols, many updates/sec)

  • Efficient (don’t re-render everything)

  • Correct (ordering, duplicates, reconnection)

  • Secure (authz, least privilege, data boundaries)

This blog explains a production-grade approach you can reuse long-term.


1) The core challenge

Event: Backend receives/creates a new Buy Order
Requirement: Update hundreds of UI components instantly

Hidden complexity:

  • Not every component cares about every order

  • Updates can be bursty (market open, news spikes)

  • UIs must survive disconnects and still be correct

  • You can’t blast full snapshots repeatedly without melting bandwidth and browsers


2) The mistake interviewers expect you to avoid

Naive option A: Polling

UI calls GET /orders every 1 second.

  • Too slow (1s is not “real-time”)

  • Too expensive (N users × polling interval)

  • Doesn’t scale at peak

Naive option B: Backend “pushes to components”

Server tries to “notify each widget”.

  • Tight coupling: backend shouldn’t know UI composition

  • Hard to evolve: every new widget changes backend requirements

The right mental model is:
Backend publishes events. Clients subscribe. UI reacts via shared state.


3) The recommended architecture

Event-driven + push to clients

Order Service
   |
   |  (BuyOrderCreated event)
   v
Event Broker (Kafka / PubSub / Redis Streams)
   |
   |  (fan-out)
   v
Realtime Gateway (WebSocket / SSE)
   |
   v
Client App (central state store)
   |
   v
UI components subscribe/select state slices

Why this works:

  • Decoupled: backend doesn’t care how many components exist

  • Scalable: broker + gateway scale horizontally

  • Efficient: UI updates state once; components re-render selectively


4) Backend design: emit events, don’t ship full state

When an order is created, publish a delta event.

Example event payload (delta)

{
  "type": "BUY_ORDER_CREATED",
  "orderId": "ORD-123",
  "symbol": "AAPL",
  "price": 180.25,
  "qty": 100,
  "timestamp": "2026-01-13T14:22:05.123Z",
  "sequence": 998877,
  "tenantId": "T1"
}

Key fields that matter in production:

  • orderId → idempotency / dedupe

  • symbol → routing / partitions / topic design

  • sequence or broker offset → ordering logic

  • tenantId → isolation/security boundary

Where to publish

Publish to an event broker (Kafka, Google Pub/Sub, or Redis Streams, as in the architecture above), on a topic partitioned by symbol so per-symbol ordering is preserved.
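As a sketch, here is what building and publishing the delta event might look like. The `broker` object is an in-memory stand-in for a real client (kafkajs, a Redis Streams client, etc.), and the sequence counter is illustrative; in production the broker/log usually assigns offsets:

```javascript
// Stand-in broker: records what was published so the shape is easy to inspect.
const broker = {
  published: [],
  publish(topic, key, value) {
    this.published.push({ topic, key, value });
  },
};

let sequence = 998876; // illustrative; normally assigned by the broker/log

function publishBuyOrderCreated(order) {
  const event = {
    type: "BUY_ORDER_CREATED",
    orderId: order.orderId,
    symbol: order.symbol,
    price: order.price,
    qty: order.qty,
    timestamp: new Date().toISOString(),
    sequence: ++sequence,
    tenantId: order.tenantId,
  };
  // Key by symbol: all events for one symbol land on one partition,
  // which is what gives you per-symbol ordering downstream.
  broker.publish("orders", event.symbol, event);
  return event;
}

publishBuyOrderCreated({ orderId: "ORD-123", symbol: "AAPL", price: 180.25, qty: 100, tenantId: "T1" });
```

The important design choice is the partition key: keying by `symbol` trades global ordering (which you rarely need) for per-symbol ordering (which you do).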

5) Fan-out: why you want a broker

A broker lets multiple consumers react to the same event without coupling: the realtime gateway, analytics, risk checks, and notification services can each consume the same BuyOrderCreated stream independently.

This prevents your Order Service from becoming a “god service” that must call everything.
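The fan-out idea can be shown with a minimal in-memory stand-in for a broker topic (a real broker adds persistence, partitions, and offsets on top of this): every subscriber sees every event, and the publisher knows nothing about them.

```javascript
// Minimal in-memory topic: publish once, every subscriber receives it.
class Topic {
  constructor() { this.subscribers = []; }
  subscribe(handler) { this.subscribers.push(handler); }
  publish(event) { this.subscribers.forEach((h) => h(event)); }
}

const orders = new Topic();
const received = { gateway: [], risk: [] };

orders.subscribe((e) => received.gateway.push(e)); // realtime gateway consumer
orders.subscribe((e) => received.risk.push(e));    // risk/analytics consumer

orders.publish({ type: "BUY_ORDER_CREATED", orderId: "ORD-123" });
```

Adding a new consumer (say, an audit log) is one more `subscribe` call; the Order Service is untouched.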


6) Realtime delivery: WebSockets vs SSE

WebSockets (most common for trading)

  • bi-directional

  • good for subscriptions (“subscribe to symbols”)

  • low overhead per message

SSE (Server-Sent Events)

  • simpler, one-way push

  • works well when client only needs updates

  • can be easier through proxies (but still needs tuning)

For a trading dashboard, WebSockets are usually the best default.


7) Avoid updating 100 components directly: use a central client store

The key UI idea:

Update application state once; let components derive what they need.

If you “imperatively” call updateOrderBook(), updateChart(), updateTicker()… you’ll create:

  • duplicated logic

  • inconsistent state between widgets

  • performance problems

Instead, use a state store (Redux/Zustand/MobX/RxJS), then have components subscribe to slices.

Client-side event handling (conceptual)

socket.onmessage = (msg) => {
  const event = JSON.parse(msg.data);
  if (event.type === "BUY_ORDER_CREATED") {
    orderStore.applyBuyOrderCreated(event);
  }
};

Store update should be idempotent

applyBuyOrderCreated(event) {
  // Idempotent: at-least-once delivery means the same event can arrive twice
  if (this.seenOrderIds.has(event.orderId)) return;
  this.seenOrderIds.add(event.orderId);

  // Update normalized entities, then merge the delta into the symbol's book
  this.ordersById[event.orderId] = event;
  this.orderBookBySymbol[event.symbol] = applyDelta(
    this.orderBookBySymbol[event.symbol],
    event
  );
}

Components subscribe only to relevant slices

  • OrderBook widget subscribes to orderBookBySymbol[AAPL]

  • Chart subscribes to priceTicks[AAPL]

  • Alerts subscribes to orderEvents filtered by user rules

This avoids the “everything re-renders on every event” failure mode.
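A slice subscription can be sketched as a selector plus a change check (library-free here for clarity; Redux/Zustand selectors do the same thing): a listener only fires when its slice actually changed.

```javascript
// Sketch of slice subscriptions: each listener registers a selector and is
// notified only when the selected slice changes (names are illustrative).
class Store {
  constructor(state) { this.state = state; this.listeners = []; }
  subscribe(selector, onChange) {
    this.listeners.push({ selector, onChange, last: selector(this.state) });
  }
  update(mutator) {
    mutator(this.state);
    for (const l of this.listeners) {
      const next = l.selector(this.state);
      if (next !== l.last) { l.last = next; l.onChange(next); } // skip unchanged slices
    }
  }
}

const store = new Store({ orderBookBySymbol: { AAPL: { bids: 1 }, TSLA: { bids: 2 } } });
let aaplRenders = 0, tslaRenders = 0;
store.subscribe((s) => s.orderBookBySymbol.AAPL, () => aaplRenders++);
store.subscribe((s) => s.orderBookBySymbol.TSLA, () => tslaRenders++);

// An AAPL event replaces only the AAPL slice; the TSLA widget stays quiet.
store.update((s) => { s.orderBookBySymbol.AAPL = { bids: 5 }; });
```

The reference-equality check is why immutable slice updates matter: replacing the slice object is what signals “this changed.”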


8) Performance techniques that matter in real life

A) Selective subscriptions (server + client)

Don’t broadcast everything to everyone.

Client tells gateway:

{ "type": "SUBSCRIBE", "symbols": ["AAPL", "TSLA"], "channels": ["ORDER_BOOK", "TRADES"] }

Gateway only pushes relevant updates.
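On the gateway side, selective subscription is essentially a `symbol → sessions` index. A sketch (session objects here are illustrative stand-ins for real WebSocket connections):

```javascript
// Gateway-side routing: sessions register symbol subscriptions and only
// receive events for those symbols.
class Gateway {
  constructor() { this.sessionsBySymbol = new Map(); } // symbol -> Set<session>
  handleSubscribe(session, symbols) {
    for (const s of symbols) {
      if (!this.sessionsBySymbol.has(s)) this.sessionsBySymbol.set(s, new Set());
      this.sessionsBySymbol.get(s).add(session);
    }
  }
  push(event) {
    const sessions = this.sessionsBySymbol.get(event.symbol);
    if (sessions) for (const sess of sessions) sess.send(event);
  }
}

const gw = new Gateway();
const alice = { sent: [], send(e) { this.sent.push(e); } };
const bob = { sent: [], send(e) { this.sent.push(e); } };
gw.handleSubscribe(alice, ["AAPL", "TSLA"]);
gw.handleSubscribe(bob, ["TSLA"]);

gw.push({ symbol: "AAPL", type: "BUY_ORDER_CREATED" }); // reaches alice only
```

Routing cost per event is then proportional to interested sessions, not total sessions.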

B) Batch updates during bursts

Markets can produce thousands of events/second.
Instead of re-rendering per message:

  • buffer for 50–100ms

  • apply a batch

  • render once

This preserves “real-time feel” without UI thrash.
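A minimal batcher looks like this (the 50ms interval is a tunable, not a prescribed value; `applyBatch` stands in for one store update plus one render):

```javascript
// Buffer incoming events and apply them as one batch per interval.
class EventBatcher {
  constructor(applyBatch, intervalMs = 50) {
    this.applyBatch = applyBatch;
    this.intervalMs = intervalMs;
    this.buffer = [];
    this.timer = null;
  }
  push(event) {
    this.buffer.push(event);
    if (!this.timer) this.timer = setTimeout(() => this.flush(), this.intervalMs);
  }
  flush() {
    clearTimeout(this.timer);
    this.timer = null;
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.applyBatch(batch); // one store update + one render per batch
  }
}

let renders = 0, lastBatch = [];
const batcher = new EventBatcher((batch) => { renders++; lastBatch = batch; });
batcher.push({ seq: 1 });
batcher.push({ seq: 2 });
batcher.push({ seq: 3 });
batcher.flush(); // normally fired by the timer; called directly here for clarity
```

Three events, one render: during a burst of thousands of events/second this is the difference between a smooth UI and a frozen tab.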

C) Virtualize big lists

Order books and trade tapes can be huge.
Use list virtualization so only visible rows render.

D) Normalize data

Store entities by ID (ordersById) and derive views via selectors.
This avoids copying large arrays repeatedly.
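Normalization in practice: entities keyed by ID plus an index per symbol, with views derived by a selector (shapes are illustrative):

```javascript
// Normalized store sketch: entities by id, views derived on demand.
const state = {
  ordersById: {},
  orderIdsBySymbol: {}, // symbol -> array of orderIds
};

function addOrder(order) {
  state.ordersById[order.orderId] = order;
  (state.orderIdsBySymbol[order.symbol] ??= []).push(order.orderId);
}

// Selector: derive the per-symbol view without copying whole entity arrays.
function selectOrders(symbol) {
  return (state.orderIdsBySymbol[symbol] ?? []).map((id) => state.ordersById[id]);
}

addOrder({ orderId: "ORD-1", symbol: "AAPL", price: 180.25, qty: 100 });
addOrder({ orderId: "ORD-2", symbol: "AAPL", price: 180.3, qty: 50 });
```

An update to one order touches one entry in `ordersById`; every widget that derives from it picks up the change without any array being rebuilt.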


9) Correctness: ordering, duplicates, and reconnects

Trading UIs can’t drift.

A) Ordering

Events for the same symbol should be ordered.
Common technique:

  • broker partitions by symbol

  • consumer reads in order per partition

B) Duplicate events

Many systems deliver at-least-once.
Solution:

  • keep a seenOrderIds set (with TTL)

  • or track lastSequence per symbol
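The `lastSequence` approach doubles as gap detection, which the dedupe-set approach cannot give you. A sketch:

```javascript
// Track lastSequence per symbol: drop duplicates, and flag gaps so the
// client knows it must re-snapshot that symbol.
const lastSequenceBySymbol = new Map();

function checkSequence(event) {
  const last = lastSequenceBySymbol.get(event.symbol);
  if (last !== undefined && event.sequence <= last) return "duplicate"; // already applied
  if (last !== undefined && event.sequence > last + 1) return "gap";    // missed events
  lastSequenceBySymbol.set(event.symbol, event.sequence);
  return "apply";
}
```

A "gap" result means silently applying the event would leave the book wrong; the correct reaction is to pause that symbol and fetch a fresh snapshot.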

C) Reconnect strategy: snapshot + delta

When a client reconnects:

  1. fetch a snapshot (REST)

  2. resume streaming deltas from a known sequence/offset

Example:

  • Client last seen sequence=998870

  • On reconnect: request snapshot + subscribe from 998871

This is the difference between a demo and a real product.
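The snapshot + delta merge can be sketched as follows (the snapshot and buffered deltas stand in for the REST response and the messages queued while reconnecting):

```javascript
// Resume sketch: start from the snapshot, apply only deltas newer than it.
function resume(snapshot, bufferedDeltas, applyDelta) {
  let state = { ...snapshot.state };
  let sequence = snapshot.sequence;
  for (const delta of bufferedDeltas) {
    if (delta.sequence <= sequence) continue; // already included in the snapshot
    state = applyDelta(state, delta);
    sequence = delta.sequence;
  }
  return { state, sequence };
}

const snapshot = { sequence: 998870, state: { qty: 100 } };
const deltas = [
  { sequence: 998869, qty: 5 },  // stale: skipped
  { sequence: 998871, qty: 20 }, // newer: applied
];
const result = resume(snapshot, deltas, (s, d) => ({ qty: s.qty + d.qty }));
```

The `delta.sequence <= sequence` guard is the whole trick: it makes the merge safe even when the snapshot and the stream overlap.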


10) Backend gateway scaling

A WebSocket gateway must handle:

  • many concurrent connections

  • efficient message routing by symbol

  • throttling/backpressure

Typical scaling approach:

  • stateless gateways behind a load balancer

  • each gateway consumes from broker

  • gateway routes messages to subscribed sessions

If you need cross-node session routing, you can:

  • shard subscriptions by symbol

  • or use a lightweight pubsub layer between gateways
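Sharding subscriptions by symbol can be as simple as a stable hash mapping each symbol to a gateway node, so cross-node routing becomes a lookup instead of a broadcast (the hash here is a toy; in production you might use consistent hashing to survive node count changes):

```javascript
// Stable hash: the same symbol always routes to the same gateway node.
function shardFor(symbol, nodeCount) {
  let hash = 0;
  for (const ch of symbol) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % nodeCount;
}

// Every AAPL subscriber connects (or is proxied) to node shardFor("AAPL", 4),
// so AAPL events are delivered to exactly one node's session table.
```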


11) Security & access control (don’t skip this)

Interviewers love when you mention:

  • authenticate WebSocket with JWT

  • authorize subscriptions (user can only subscribe to allowed markets/tenants)

  • rate-limit subscriptions and message volume

  • validate payload schemas

  • protect against replay (sequence checks)

In trading systems, leaking order flow is a major risk.
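A subscription-authorization check might look like the sketch below. The claim shape is illustrative; in practice it would come from a verified JWT presented during the WebSocket handshake:

```javascript
// Authorize each SUBSCRIBE against the caller's (verified) claims.
function authorizeSubscribe(claims, request) {
  if (claims.tenantId !== request.tenantId) return false;             // tenant isolation
  if (request.symbols.length > claims.maxSubscriptions) return false; // subscription limit
  return request.symbols.every((s) => claims.allowedSymbols.includes(s));
}

const claims = { tenantId: "T1", allowedSymbols: ["AAPL", "TSLA"], maxSubscriptions: 10 };
```

The key point is that authorization happens per subscription, not once per connection: a valid token does not entitle a client to every channel.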


12) Observability: how you prove it works

Metrics to mention:

  • event lag (broker → gateway → client)

  • dropped messages / reconnect rate

  • UI render time / frame drops

  • per-symbol message throughput

  • gateway CPU/memory and connection counts

Logs/traces:

  • correlate orderId through the pipeline

  • sample messages at controlled rate (don’t log every tick)


13) A strong interview answer (what you say out loud)

“I’d use an event-driven design: when a buy order is created, the Order Service publishes a BuyOrderCreated event to a broker. A realtime gateway consumes and pushes deltas to clients via WebSockets, scoped by symbol subscriptions. On the client, I keep a central state store and update it once per event/batch; UI components subscribe to slices so only relevant widgets re-render. For correctness, I handle ordering per symbol, dedupe by orderId/sequence, and on reconnect do snapshot + delta replay. I’d also add throttling, batching, virtualization, and observability metrics for latency and lag.”

That answer signals:

  • architecture maturity

  • UI performance awareness

  • reliability instincts


14) Common follow-up questions (and what they’re testing)

  1. “What if events spike 10× at market open?”
    → batching, backpressure, dropping non-critical channels, prioritization

  2. “How do you avoid inconsistent widgets?”
    → single source of truth store + derived selectors

  3. “How do you ensure ordering?”
    → partitioning by symbol + sequence checks

  4. “What happens on reconnect?”
    → snapshot + resume from offset

  5. “How do you secure subscriptions?”
    → authN/authZ + tenant isolation + rate limits


Cheat sheet: what to remember long-term

  • Push, not polling

  • Event broker for decoupling and fan-out

  • WebSockets for realtime delivery

  • Central store on client

  • Selective subscriptions

  • Batching + virtualization

  • Ordering + idempotency + snapshot/resume

  • Security + observability


Topics worth exploring next:

  • a simple diagram + message flow you can draw on a whiteboard

  • a React + Redux/Zustand concrete example

  • a Java/Spring backend pseudo-implementation for producer + gateway

  • common trade-offs between Kafka vs Redis vs managed Pub/Sub
