Hi everyone,
In this article, I’d like to show you how to use the Actor pattern in an AWS environment. It’s possible that you’re already using this approach without realizing that it actually has a name: the Actor Model.

What is the Actor Model?

I’m absolutely sure I couldn’t explain it better than the person who invented and formalized the Actor Model, which is why I highly recommend watching this video.

Let me just add a quick recap and show you how to implement the Actor Model in AWS.

The Actor Model is a conceptual model used in computer science to handle concurrent computation. In this model, an actor is a computational entity that, in response to receiving a message, can:

  • Perform a task (e.g., process data or make a decision),

  • Send messages to other actors,

  • Create new actors, and

  • Update its internal state.

Each actor operates independently and communicates only by message passing, which helps avoid issues related to shared memory and makes systems easier to scale and reason about. Let me rephrase it in simpler words.

Actor

  1. Lightweight, so it is easy to create thousands of them
  2. Has its own state
  3. Has its own mailbox (queue)
  4. Communicates with other actors only through messages
  5. Processes messages in FIFO order
  6. Processes only one message at a time
  7. Is decoupled from other actors

In AWS, you can implement the Actor pattern using services like:

  • AWS Lambda (as individual actors)
  • Amazon SQS or SNS (for message passing)
  • Step Functions or EventBridge (for orchestration)
  • DynamoDB (for storage/state)

It might seem like the Actor Model is simply a combination of a queue service, a Lambda function triggered by that queue, and a DynamoDB table — and you'd be mostly right. However, there are some subtle nuances, and I’d like to walk you through them.

Requirements

A typical use case well-suited for the Actor Model is a shopping cart in an e-commerce application. You'll easily come across this example in many articles, which is why I'll provide some slightly different ones instead.

Let's imagine a fintech application where a user can deposit money into their own account or transfer it to another account. In the fintech world, these operations are referred to as transactions, and they typically fall into two categories: deposit and transfer.

Another example is a store where products can arrive and depart at any time. To prevent the inventory from dropping below zero, we must ensure that all arrivals are processed before any departures.

🎯 I’ll use these two scenarios to demonstrate how to implement a proper, scalable solution that keeps balances and inventory from ever going below zero, using AWS resources and the Actor Model.

We have a lot to cover, so let's jump in and get started straight away.

Solution

Initial Approach: Limited Scalability
We can split our transactions into two batches: the first containing only deposit transactions, and the second containing transfer transactions. These batches are then sent to a FIFO queue for processing, using the same messageGroupId for all messages. This keeps the transactions in the correct order and covers the scenario where a user deposits money before transferring it to another user.

The same idea applies to products: first we process all arrivals, and only then the departures.
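
Here is a minimal sketch (TypeScript, AWS SDK v3) of what the sending side could look like for the transaction example. The queue URL, the transaction shape, and the field names are assumptions made for illustration, not a prescribed implementation.

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

// Hypothetical transaction shape used throughout this article's sketches.
interface Transaction {
  id: string;
  type: "deposit" | "transfer";
  clientId: string;
  amount: number;
}

const sqs = new SQSClient({});
const QUEUE_URL = process.env.QUEUE_URL!; // assumed FIFO queue URL (must end with .fifo)

export const handler = async (transactions: Transaction[]): Promise<void> => {
  // Deposits first, transfers second, so money is never spent before it is credited.
  const sorted = [
    ...transactions.filter((t) => t.type === "deposit"),
    ...transactions.filter((t) => t.type === "transfer"),
  ];

  for (const tx of sorted) {
    await sqs.send(
      new SendMessageCommand({
        QueueUrl: QUEUE_URL,
        MessageBody: JSON.stringify(tx),
        // One shared group gives strict global ordering, but no parallelism.
        MessageGroupId: "all-transactions",
        MessageDeduplicationId: tx.id,
      })
    );
  }
};
```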

Let’s take a visual look at how our flow works:

Diagram of a non-scalable implementation

Imagine that Lambda 1 has already sorted and filtered the products/transactions and is sending them one by one to a queue. Lambda 2 is triggered by the queue and processes each message individually, storing useful information, such as the quantity of products or the cash balance of clients, in a DynamoDB table. The bottleneck of this approach is the processing Lambda, because only a single instance of it ever runs.
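
The processing side (Lambda 2) could look roughly like the sketch below. The table name, key schema, and attribute names are assumptions for illustration; a real implementation would also need error handling and idempotency.

```typescript
import type { SQSEvent } from "aws-lambda";
import { DynamoDBClient, UpdateItemCommand } from "@aws-sdk/client-dynamodb";

const dynamo = new DynamoDBClient({});
const TABLE_NAME = process.env.TABLE_NAME ?? "Balances"; // assumed table name

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    const tx = JSON.parse(record.body) as {
      clientId: string;
      type: "deposit" | "transfer";
      amount: number;
    };

    // Deposits add to the balance, transfers subtract from it
    // (crediting the receiving account is omitted for brevity).
    const delta = tx.type === "deposit" ? tx.amount : -tx.amount;

    await dynamo.send(
      new UpdateItemCommand({
        TableName: TABLE_NAME,
        Key: { clientId: { S: tx.clientId } },
        UpdateExpression: "ADD balance :delta",
        ExpressionAttributeValues: { ":delta": { N: delta.toString() } },
      })
    );
  }
};
```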

Scalable Solution Using the Actor Model
To solve this issue, the Actor pattern comes in handy. Before I describe the solution, I’d like to stress again that every Actor must have its own state, its own queue (mailbox), and a task to perform. In the non-scalable solution, the processing Lambda function has all these characteristics and may look like an Actor, but what is the state of this Actor? You’ll probably say it's DynamoDB, and you’d be right, but that's a shared state for the entire application. What I’d like to do instead is break it down into the smallest possible pieces. In this case, it could be either the clientId or the productId, couldn’t it?

So, this will help us solve our task—let’s take a look at it visually:

Actor implementation

So, let's break down the actor implementation:

  1. The first Lambda function already has the sorted list of products or transactions and sends them one by one to a FIFO queue, using messageGroupId as either the clientId or productId (see the sketch after this list).

  2. The queue triggers multiple instances of the processing Lambda function, creating them as needed. Each instance processes messages grouped by messageGroupId, effectively isolating the flow per client or product.
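
On the sending side, the only real change from the earlier sketch is the MessageGroupId. A minimal sketch, reusing the same assumed names:

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

// Sends one transaction, grouping messages by clientId so each client becomes
// its own "actor mailbox" inside the FIFO queue (use productId for the inventory case).
export const sendToActor = async (
  queueUrl: string,
  tx: { id: string; clientId: string; type: string; amount: number }
): Promise<void> => {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: queueUrl,
      MessageBody: JSON.stringify(tx),
      MessageGroupId: tx.clientId, // the actor's identity
      MessageDeduplicationId: tx.id,
    })
  );
};
```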

MessageGroupId in FIFO SQS:

  • Guarantees strict ordering within the same MessageGroupId.
  • Only one Lambda invocation will process a given message group at a time.
  • If two messages have the same MessageGroupId, they are processed one-by-one, in order.
  • If two messages have different MessageGroupIds (e.g., client-1, client-2), they can be processed concurrently by separate Lambda invocations.

As a result, we have a scalable solution where each processing Lambda invocation fits the core characteristics of the Actor Model: it has its own state, its own mailbox, and a task to perform.

💡 As a bonus, if a message fails processing (a Lambda error), SQS will not deliver the next message in that group until the failed one is successfully handled or moved to a DLQ.
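
For completeness, here is a hedged infrastructure sketch in AWS CDK (TypeScript) wiring the FIFO queue, a DLQ, and the processing Lambda together. The resource names, asset path, timeouts, and retry count are assumptions for illustration.

```typescript
import { Duration, Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as sqs from "aws-cdk-lib/aws-sqs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { SqsEventSource } from "aws-cdk-lib/aws-lambda-event-sources";

export class ActorStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Failed messages are retried a few times, then parked in the DLQ so the
    // rest of the message group can keep moving.
    const dlq = new sqs.Queue(this, "ActorDlq", { fifo: true });

    const queue = new sqs.Queue(this, "ActorQueue", {
      fifo: true,
      contentBasedDeduplication: true,
      visibilityTimeout: Duration.seconds(60),
      deadLetterQueue: { queue: dlq, maxReceiveCount: 3 },
    });

    const processor = new lambda.Function(this, "ProcessorFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("lambda/processor"), // hypothetical asset path
    });

    // batchSize: 1 keeps the "one message at a time per actor" semantics explicit.
    processor.addEventSource(new SqsEventSource(queue, { batchSize: 1 }));
  }
}
```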

Recap

🚀 The Actor Model is a powerful pattern for building scalable, concurrent systems—and AWS gives us all the tools to implement it effectively. By assigning state and queues per actor (e.g., per client or product), and combining Lambda with FIFO SQS queues, we can create robust, fault-tolerant systems that scale effortlessly.

Also, don’t forget that every AWS account has Lambda quotas—especially regarding concurrent executions. I recommend properly configuring these limits before going to production.
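
One possible way to configure those limits, assuming the processor function and queue from the CDK sketch above (the numbers are placeholders, not recommendations):

```typescript
// Reserve a slice of the account-wide concurrency quota for the processor,
// so a burst of actors cannot starve the rest of the account.
const processor = new lambda.Function(this, "ProcessorFn", {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("lambda/processor"), // hypothetical asset path
  reservedConcurrentExecutions: 50,
});

// Optionally also cap how many message groups the SQS trigger processes in
// parallel (maxConcurrency requires a minimum value of 2).
processor.addEventSource(
  new SqsEventSource(queue, { batchSize: 1, maxConcurrency: 50 })
);
```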

If you’d like to support my work, you can subscribe, give me kudos, or share your valuable feedback. Your support and insights mean a lot!