SSE vs Polling: Why Real-Time Inbox Push Matters

Polling is the oldest temp-mail UX pattern: refresh every five seconds, hope something arrived. SSE replaces it with a persistent connection that pushes mail the instant it lands. The difference is bigger than it sounds.

Mail.cx team · 6 min read · product

Open any disposable email site from the last decade and you'll see the same UX pattern: a counter that says "checking for new mail in 4 seconds...3...2...1..." Refresh. Empty. Wait again. This is polling, and it's been the default approach to temp-mail since the early 2010s. mail.cx replaces it with Server-Sent Events. The difference matters more than it might sound.

What is polling?

Polling is the simplest possible way to get new data: ask the server "anything new?" on a fixed interval. Most disposable email sites poll every 5 to 30 seconds.

Here's the rough sequence:

  1. Page loads, opens a timer.
  2. Every N seconds, the timer fires.
  3. The browser sends a GET /messages request to the server.
  4. The server responds with all messages (or "no new messages").
  5. If new messages, the UI updates. Otherwise, wait for the next tick.
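The loop above can be sketched in a few lines of Go. The `/messages` endpoint, the `Message` fields, and the base URL here are illustrative placeholders, not a real API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Message mirrors the shape a generic /messages endpoint might return.
type Message struct {
	ID      string `json:"id"`
	Subject string `json:"subject"`
}

// newMessages returns entries in latest not yet in seen -- the diff a
// polling UI must recompute on every tick, even when nothing changed.
func newMessages(seen map[string]bool, latest []Message) []Message {
	var fresh []Message
	for _, m := range latest {
		if !seen[m.ID] {
			seen[m.ID] = true
			fresh = append(fresh, m)
		}
	}
	return fresh
}

// poll runs the steps from the list: timer fires, GET, diff, repeat.
func poll(base string, every time.Duration) {
	seen := make(map[string]bool)
	for range time.Tick(every) {
		resp, err := http.Get(base + "/messages")
		if err != nil {
			continue // network blip: just wait for the next tick
		}
		var latest []Message
		json.NewDecoder(resp.Body).Decode(&latest)
		resp.Body.Close()
		for _, m := range newMessages(seen, latest) {
			fmt.Println("new mail:", m.Subject) // UI update goes here
		}
	}
}

func main() {
	// Demo the diff step without a live server.
	seen := map[string]bool{"a1": true}
	fresh := newMessages(seen, []Message{{ID: "a1"}, {ID: "b2", Subject: "hi"}})
	fmt.Println(len(fresh)) // 1: only b2 is new
}
```

Note that the server does the same full-inbox work on every tick, whether or not anything arrived: that is the wasted-bandwidth cost in the next list.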

Pros: dead simple to build, dead simple to debug, works on any HTTP server.

Cons:

  • Latency: an email that arrives between polls waits up to N seconds to appear.
  • Wasted bandwidth: every poll is a network round-trip whether or not there's anything new. On a 5-second interval, that's 720 wasted requests per hour per user.
  • Refresh-button thrash: users learn to mash refresh when they're in a hurry, fighting the polling timer. The refresh button is a UX failure mode.

What is SSE?

Server-Sent Events (SSE) is the opposite shape: the browser opens one persistent connection to the server, and the server pushes data down it whenever something happens.

The sequence:

  1. Page loads, opens an EventSource to /v1/sse/addr?token=....
  2. The server holds the connection open.
  3. Whenever a new email arrives for this address, the server writes a one-line event down the connection.
  4. The browser receives the event, updates the UI immediately.
  5. The connection stays open until the user navigates away.
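For comparison, here is a bare-bones SSE reader in Go (browsers get all of this for free via EventSource). The URL shape follows the article; `parseEvent` handles only the `data:` field of the wire format, as a sketch rather than a full parser:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

// parseEvent pulls the payload out of one line of the SSE wire format.
// Events look like "data: {...}"; lines starting with ":" are comments.
func parseEvent(line string) (data string, ok bool) {
	if strings.HasPrefix(line, "data:") {
		return strings.TrimSpace(strings.TrimPrefix(line, "data:")), true
	}
	return "", false // comment, id:, event:, retry:, or blank separator
}

// listen opens one long-lived GET and blocks, printing each pushed event.
func listen(url string) error {
	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Set("Accept", "text/event-stream")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() { // blocks until the server writes the next line
		if data, ok := parseEvent(sc.Text()); ok {
			fmt.Println("new mail event:", data) // update the UI here
		}
	}
	return sc.Err()
}

func main() {
	data, ok := parseEvent(`data: {"id":"m1"}`)
	fmt.Println(ok, data)
}
```

The key structural difference from polling: there is no timer. The client does nothing until the server has something to say.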

Pros:

  • Sub-second delivery: new mail appears as fast as the network round-trip allows, typically under 100ms after the server receives the email.
  • No wasted requests: one connection, used only when there's data.
  • No refresh button needed: the inbox is always live.

Cons:

  • One persistent connection per user: more server resources than stateless polling.
  • Reconnect logic required: if the connection drops (network blip, proxy timeout), the client needs to reopen it.
  • One-way: server-to-client only. For client-to-server, you still need normal HTTP requests.

The persistent-connection cost is the main reason cheaper temp-mail services don't use SSE. It's also one of the reasons mail.cx is built differently.

Why the latency difference matters

For someone idly waiting for a marketing email, a 5-second polling delay is invisible. For most disposable email use cases, it isn't:

  • OTP and 2FA codes: many services give you 60 seconds to enter the code. If polling delays the email by 10 seconds, you've lost a sixth of your window. SSE delivers in under a second; you have the full 60.
  • Time-sensitive verification flows: "click this link in the next 60 seconds to verify your account". Polling can eat half the window before you see the link.
  • Just feeling fast: SSE makes the inbox feel responsive, like a chat app. Polling makes it feel like an email client from 2008.

Once you've used a push-based temp-mail service, going back to polling is jarring.

How mail.cx implements SSE

The architecture is straightforward:

  • The frontend opens GET /v1/sse/addr?token=<anon-token> in an EventSource.
  • The Go backend authenticates the token and subscribes to the Redis pub/sub channel for that mailbox address (channel:addr:<address>).
  • When the SMTP gateway receives a new email, the ingest worker writes it to Redis and PUBLISHes a message on the channel.
  • The pub/sub subscriber in the API layer forwards the event down all open SSE connections for that address.
  • The browser receives the event in under 100ms (network-bound) and updates the inbox.

The whole pipeline is tuned to keep the steady-state cost low — one TCP connection per user, idle most of the time, occasional small messages. A single API node handles thousands of concurrent SSE connections without breaking a sweat.

Connection management

Long-lived HTTP connections have a few real-world failure modes:

  • Network blips: the user switches Wi-Fi, walks into a tunnel, suspends the laptop. The connection drops.
  • Proxy timeouts: some corporate or ISP proxies kill HTTP connections after 30-60 seconds of no data.
  • Cloudflare and friends: most CDNs are SSE-friendly but have their own connection limits.

mail.cx handles these the standard way:

  • The browser's built-in EventSource auto-reconnects with exponential backoff (3-30 seconds).
  • The server sends a heartbeat comment (:) every 25 seconds to keep proxies happy.
  • The client reconnects with the Last-Event-ID header, so any messages buffered during the disconnect are replayed.
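The server side of that contract can be sketched like so, under stated assumptions: a 25-second heartbeat ticker, plus a replay of everything newer than the client's Last-Event-ID. The buffered-event store here is a plain slice with integer IDs; the real service buffers elsewhere:

```go
package main

import (
	"fmt"
	"io"
	"time"
)

// Event is one buffered inbox notification with a monotonically
// increasing ID, so Last-Event-ID identifies a position in the stream.
type Event struct {
	ID   int
	Data string
}

// replayFrom returns every buffered event newer than lastID -- what the
// server resends when a client reconnects after a dropped connection.
func replayFrom(buffered []Event, lastID int) []Event {
	var missed []Event
	for _, e := range buffered {
		if e.ID > lastID {
			missed = append(missed, e)
		}
	}
	return missed
}

// heartbeat writes an SSE comment line every interval so idle proxies
// don't kill the connection; close done when the client disconnects.
func heartbeat(w io.Writer, interval time.Duration, done <-chan struct{}) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-done:
			return
		case <-t.C:
			fmt.Fprint(w, ": keep-alive\n\n") // comments are ignored by EventSource
		}
	}
}

func main() {
	buf := []Event{{1, "a"}, {2, "b"}, {3, "c"}}
	fmt.Println(len(replayFrom(buf, 1))) // events 2 and 3 were missed
}
```

Because the heartbeat is a comment, EventSource silently discards it; its only job is to keep bytes flowing past timeout-happy middleboxes.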

For the user, all this is invisible. The inbox stays live across network blips, sleeps, and proxy weirdness.

Polling fallback?

A few legacy clients can't do SSE — old Internet Explorer, restrictive enterprise networks that block long connections entirely. mail.cx's REST API supports normal GET /v1/inbox/:address/emails for these cases. The web UI doesn't fall back to polling — if SSE fails three reconnect attempts, we show "reconnecting..." and let the user trigger a manual refresh. In practice this never happens for real users.

Why it's not just SSE — it's the architecture

The real win isn't "SSE vs polling" as a single technical choice. It's that mail.cx is designed end-to-end around real-time delivery:

  • SMTP receives mail directly into NATS JetStream (no spool, no batch processing).
  • Workers consume from JetStream and write to Redis + publish to the pub/sub channel in one atomic operation.
  • API nodes hold SSE connections and bridge pub/sub events to clients.

Total time from "SMTP server accepts the email" to "browser shows the new message" is typically 50-200ms. With polling on top of any backend, the median latency is bounded by the polling interval.

A polling frontend on top of mail.cx's backend would still be 5-30 seconds slower per email than the SSE frontend. The wire-protocol choice matters.
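The interval bound is easy to check numerically: if mail arrives at a uniformly random moment within a poll window of N seconds, the added wait averages N/2. A quick simulation:

```go
package main

import (
	"fmt"
	"math/rand"
)

// meanPollingDelay simulates mail arriving at uniform random offsets
// inside an N-second polling window and averages the wait until the
// next poll fires. The expected value is N/2.
func meanPollingDelay(intervalSec float64, trials int) float64 {
	rng := rand.New(rand.NewSource(1)) // fixed seed for reproducibility
	total := 0.0
	for i := 0; i < trials; i++ {
		arrival := rng.Float64() * intervalSec // seconds after the last poll
		total += intervalSec - arrival         // wait until the next poll
	}
	return total / float64(trials)
}

func main() {
	// With a 5-second interval the average added latency is ~2.5s;
	// a push path removes this term entirely.
	fmt.Printf("%.1f\n", meanPollingDelay(5, 100_000))
}
```

On a 30-second interval the same math gives ~15 seconds of average added latency, which is why the polling interval, not the backend, dominates end-to-end delivery time.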

When polling is actually fine

To be fair: polling is the right answer for some use cases. Specifically:

  • Notification dashboards that update every few minutes anyway.
  • CI/CD email checks in tests that just want to know if any email arrived in the last minute.
  • Simple scripts where the latency doesn't matter and "one HTTP call every 30 seconds" is easier to debug than "stream parsing".

For the human-watching-an-inbox use case, SSE wins.

Conclusion

Real-time delivery isn't a feature; it's a design choice. mail.cx made that choice and the rest of the architecture follows. If you've spent the last decade clicking refresh on temp-mail.io, give a push-based service a try — the difference is real and you'll feel it in the first 30 seconds.

Frequently asked questions

Is SSE faster than polling for real users?

Yes — typically by 2-30 seconds per email. With 5-second polling, an email that arrives one second after the last poll waits 4+ seconds. SSE delivers the same message in well under a second.

Why don't all temp mail services use SSE?

SSE requires a persistent connection per user, which costs more server resources than polling. For services running on cheap shared hosting it's a non-trivial change. mail.cx was built around SSE from day one.

Does SSE work behind corporate firewalls?

Almost always. SSE is plain HTTP, no special protocol. Some restrictive proxies break long-lived connections; mail.cx auto-reconnects with backoff in those cases.