In today’s distributed system architectures, we break large systems into small, independent services. These services need a reliable way to talk to each other — and message queues or event streaming platforms play a critical role in enabling this communication.
But here's the key question:
👉 How reliably is a message delivered from sender to receiver?
Let’s break down the three core delivery semantics you’ll encounter in real-world systems 👇
1️⃣ At-Most-Once
🔹 Messages are delivered zero or one time
🔹 No retries, so if something fails — the message is lost
🔹 Simple, fast, but no guarantee of delivery
💡 Use case: Monitoring metrics, logs, or telemetry where occasional loss is acceptable.
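A minimal Python sketch of the fire-and-forget pattern — the flaky_transport and its 30% drop rate are made up for illustration. The sender attempts delivery once and never retries, so each message arrives zero or one time:

```python
import random

def send_at_most_once(transport, message):
    """One attempt, no retry. If the transport fails,
    the message is simply lost (and that's accepted)."""
    try:
        transport(message)
        return True
    except ConnectionError:
        return False  # loss accepted; no retry, no duplicate

received = []

def flaky_transport(message):
    # Hypothetical transport that drops ~30% of messages.
    if random.random() < 0.3:
        raise ConnectionError("dropped")
    received.append(message)

random.seed(1)
sent_ok = [send_at_most_once(flaky_transport, f"metric-{i}") for i in range(10)]
# Some metrics are lost, but nothing is ever delivered twice.
```

Fast and simple: no acks, no retry buffers — exactly why it fits telemetry where a lost data point is harmless.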
2️⃣ At-Least-Once
🔹 Messages are retried until acknowledged, so they are never lost
🔹 But a lost acknowledgment means the same message may be delivered more than once
🔹 Consumers must deduplicate, or process messages idempotently
💡 Use case: Order processing, notifications, analytics — where duplicates can be filtered or ignored.
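A sketch of this pattern with a hypothetical transport whose acknowledgments are sometimes lost: the sender retries until acked (so delivery can repeat), and the consumer deduplicates by message ID:

```python
import random

def send_at_least_once(transport, message, max_retries=100):
    """Retry until the transport acknowledges. The message is never
    lost, but a lost ack makes the sender redeliver it."""
    for _ in range(max_retries):
        try:
            transport(message)
            return
        except ConnectionError:
            continue  # retry: may cause a duplicate delivery
    raise RuntimeError("broker unreachable")

delivered = []   # what actually reached the consumer (duplicates included)
seen_ids = set()
processed = []   # what the consumer acted on after deduplication

def lossy_ack_transport(message):
    # Hypothetical transport: delivery always succeeds, but the ack
    # is lost ~40% of the time, so the sender resends a message
    # the consumer already has.
    delivered.append(message)
    if random.random() < 0.4:
        raise ConnectionError("ack lost")

def consume(message):
    # Consumer-side deduplication keyed on the message ID.
    msg_id, payload = message
    if msg_id in seen_ids:
        return  # duplicate delivery — ignore
    seen_ids.add(msg_id)
    processed.append(payload)

random.seed(7)
for i in range(5):
    send_at_least_once(lossy_ack_transport, (i, f"order-{i}"))
for m in delivered:
    consume(m)
# delivered may hold duplicates; processed holds each order exactly once.
```

The dedup set here lives in memory; a real consumer would persist it (or use a unique-key constraint in its database) so deduplication survives restarts.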
3️⃣ Exactly Once
🔹 Each message is delivered only once, no duplicates, no loss
🔹 Sounds perfect, but it’s very hard to achieve end to end; in practice it usually means at-least-once delivery plus idempotent or transactional processing (“effectively once”)
🔹 Adds complexity, latency, and throughput trade-offs
💡 Use case: Financial transactions, trading systems, accounting — where a duplicate or lost operation directly corrupts state, so every operation must be applied exactly once.
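A common route to effectively-once processing is at-least-once delivery combined with an idempotency key checked atomically with the state change. A toy sketch (the Account class and transaction IDs are invented for illustration; a real system would check and update inside one database transaction):

```python
class Account:
    """Effectively-once processing: redelivered messages are detected
    by their idempotency key and applied only once."""

    def __init__(self):
        self.balance = 0
        self.applied_ids = set()  # would be persisted in real life

    def credit(self, tx_id, amount):
        if tx_id in self.applied_ids:
            return self.balance  # duplicate delivery — no double credit
        # In production, this update and the key insert must commit
        # together, atomically.
        self.balance += amount
        self.applied_ids.add(tx_id)
        return self.balance

acct = Account()
acct.credit("t1", 100)
acct.credit("t1", 100)  # broker redelivered "t1" — ignored
acct.credit("t2", 50)
# acct.balance == 150, not 250: each transaction applied exactly once
```

This is why "exactly once" in most real systems is a property of the whole pipeline (delivery + processing), not of the broker alone.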
🧠 So... Why Does It Matter?
Choosing the right delivery guarantee isn’t just about tech — it’s about your use case and business priorities.
Sometimes speed matters more than precision. Other times, a single duplicate message could cost thousands.
💭 Bonus Insight:
📌 Message Queue vs Event Streaming Platform?
🔹 Message Queues (like RabbitMQ, SQS): Focus on reliable point-to-point delivery, typically with per-queue ordering; messages are removed once consumed.
🔹 Event Streaming Platforms (like Kafka, Pulsar): Optimized for broadcasting, storing, and replaying high-throughput event logs. Ideal for event-driven systems and real-time analytics.
What’s your go-to strategy for delivery semantics in distributed systems?
Let’s discuss in the comments 💬