Automated deployments are key to scaling any infrastructure—and voicebots are no exception. If you’re working with Amazon Connect and want to streamline the deployment of your AI Agents, you’ve come to the right place!

In this post, I will share WebRTC.ventures' best practices for automating the deployment of AI-powered voice assistants for Amazon Connect, moving beyond manual, click-by-click setups to a robust, scalable Infrastructure as Code (IaC) approach. We’ll explore how to manage both static and dynamic resources, leverage tools like Terraform and the AWS Serverless Application Model (SAM), and even set up an automated deployment pipeline using GitHub Actions to deploy your conversational IVR solutions efficiently.

Whether you're looking to reduce deployment time, minimize errors, or scale your voice solutions across multiple regions, these automation techniques will transform how you deliver voice experiences.

Why Automating Deployment is Important

AWS provides a straightforward way to create voice-based AI agents in Amazon Connect using the Management Console. With just a couple of clicks you can set up an Amazon Lex bot with all your customers' intents, easily pair it with an Amazon Connect Flow, and voilà, your bot is ready to take customer inquiries.

This method is useful for prototyping and experimentation, but it is not suitable for launching and maintaining such infrastructure at scale: manual changes are difficult to replicate accurately across multiple accounts and environments, which results in errors and inconsistencies.

Infrastructure as Code (IaC) provides an alternative approach: it lets you manage your automated voice interfaces in a way that is not only easy to replicate, but also easy to evolve through controlled, versioned changes as the solution grows.

“Static” vs “Dynamic” Resources

When defining the resources that support voicebots using IaC, we find it useful to split them into “static” and “dynamic” resources. This distinction helps us pick the right tool for managing each group, as well as the best approach for doing so.

“Static” resources are those that don’t change often, and when they do, the changes are managed outside of and independently from the voicebot release cycle. The Amazon Connect instance itself is a great example of this. Static resources can live in their own code repository and follow their own release process, usually using IaC tools such as Terraform or AWS CloudFormation.

“Dynamic” resources are those that change frequently as part of the voicebot release cycle, either directly driving those changes or resulting from them. Amazon Lex bots, and any AWS Lambda functions associated with them, are considered dynamic resources. Because of their close relationship with the voicebot code, it’s better for them to live in the same code repository and follow the same release process, preferably using a higher-level IaC framework such as AWS SAM or the AWS Cloud Development Kit (CDK).

Typically, an Amazon Connect voicebot consists of the following static and dynamic resources:

  • Static:
    • Amazon Connect instances
    • EC2 instances or ECS/EKS clusters running supporting applications/services
    • S3 Buckets
    • Vector Databases
  • Dynamic:
    • Amazon Connect Flows
    • Amazon Lex bots
    • AWS Lambda functions
    • IAM Roles and Policies
    • DynamoDB tables
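In practice, this split often maps to two code repositories: one for the static infrastructure and one for the voicebot itself. A possible layout (all names are illustrative) could look like this:

infrastructure/              # static resources, own release cycle
├── backend.tf               # remote state configuration
├── main.tf                  # Amazon Connect instance, S3 buckets, etc.
└── variables.tf

voicebot/                    # dynamic resources, released with the bot
├── template.yaml            # SAM template: Lambda, Lex bot, IAM, flows
├── agent/
│   └── app.py               # Lambda function code
└── .github/
    └── workflows/
        └── deploy.yml       # CI/CD pipeline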

Let’s see an example of how to provision a very basic voicebot for Amazon Connect using this approach.

A Basic Voicebot Workflow

In our example we feature an inbound voice-driven conversational agent that answers customer questions. The workflow of the voicebot is as follows:

  • The customer calls a claimed phone number
  • The phone call goes through a Flow in Amazon Connect
  • A Get Customer Input block sends the customer’s questions to an Amazon Lex bot
  • An AWS Lambda function fulfills key intents in Amazon Lex
  • This function generates tailored responses by leveraging Large Language Models running on an external inference platform, and knowledge bases for enhanced context using RAG
  • The supporting resources are defined with an Infrastructure as Code approach using Terraform and AWS SAM, and deployed through a CI/CD pipeline using GitHub Actions

This process is depicted below:

Overall flow of the call in Amazon Connect

Now let’s go through the steps for provisioning this voicebot.

Provisioning Static Resources Using Terraform

The first resource you need is the actual Amazon Connect instance. Since we don’t expect it to change that much—other than creating the required users and probably setting up phone numbers—we define it as a static resource and manage it using Terraform. For this basic example, this is the only static resource we need.

You can create an Amazon Connect instance using Terraform as shown below:

provider "aws" {
  region = "us-east-1"
}

resource "aws_connect_instance" "main" {
  identity_management_type = "CONNECT_MANAGED"
  inbound_calls_enabled = true
  instance_alias = "my-connect-instance"
  outbound_calls_enabled = true
  contact_flow_logs_enabled = true
  contact_lens_enabled = false

  tags = {
    "Name" = "my-voicebot"
  }
}
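One note before applying: Terraform keeps its state locally by default, which doesn’t work well for teams or CI/CD pipelines. A remote backend such as S3 solves this. Below is a minimal sketch; the bucket and DynamoDB lock table names are hypothetical and must exist beforehand:

terraform {
  backend "s3" {
    bucket         = "my-voicebot-tfstate"    # hypothetical, pre-existing bucket
    key            = "connect/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "my-voicebot-tf-locks"   # hypothetical table used for state locking
  }
}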

Then, after running terraform init to initialize the working directory and the state backend, it’s all a matter of creating a plan and applying it as follows:

terraform plan -out plan.out
terraform apply plan.out

Updating Dynamic Resources On The Go Using SAM

AWS SAM is a nice tool for managing serverless resources. It allows you to define and test resources like Lambda functions and API Gateway endpoints locally, and to deploy them using a couple of convenient commands. It works on top of CloudFormation, so you can easily extend it to provision virtually any resource you need.
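For example, once the AgentFunction shown later in this post is defined in template.yaml, you can build and invoke it locally before deploying anything (the event file here is a hypothetical sample Lex payload):

# build the function inside a Lambda-like container
sam build --use-container

# invoke the function locally with a sample event
sam local invoke AgentFunction --event events/lex-event.json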

Let’s see how to define and deploy the dynamic resources for our basic voice bot using SAM.

To provide natural conversation capabilities, we want to create a Lambda function that leverages a Large Language Model and performs RAG to generate appropriate responses. Building such a function is outside the scope of this post, but you can learn more about it in our “Enhancing CX in Amazon Connect with Conversational Bots” post on the WebRTC.ventures blog.
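That said, to make the rest of the template more concrete, here is a minimal sketch of the handler’s overall shape for Lex V2. The actual LLM/RAG logic is hidden behind a generate_answer() helper, which is purely hypothetical:

import logging
import os

logger = logging.getLogger()
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))


def generate_answer(question: str) -> str:
    # Hypothetical placeholder: call your LLM / RAG pipeline here,
    # e.g. Amazon Bedrock plus a similarity search in Pinecone.
    return "Here is the answer to your question..."


def lambda_handler(event, context):
    """Fulfillment hook for the AskAQuestion and FallbackIntent intents."""
    intent = event["sessionState"]["intent"]
    question = event.get("inputTranscript", "")
    logger.info("Fulfilling intent %s", intent["name"])

    answer = generate_answer(question)

    # Lex V2 fulfillment response: close the intent and speak the answer
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent["name"], "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": answer}],
    }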

In SAM, you define Lambda functions using the AWS::Serverless::Function resource type. In such a resource you add properties such as the location of the function code, the handler (entry point), the runtime, the memory size, and so on.

You can also set environment variables for connecting to external services. For example, if you’re using Pinecone as the vector database for RAG, you can add its API key here, or even better, reference it from AWS Secrets Manager.

In addition to the Lambda function, you also need an IAM Role and Policy to grant it the necessary permissions. These permissions should, at a minimum, include writing logs to CloudWatch. If you’re using Amazon Bedrock as the inference platform, then permissions to invoke its models are also required.

# Lambda Function Definition
AgentFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: agent-function
    CodeUri: agent/
    Role: !GetAtt AgentRole.Arn
    Timeout: 60
    MemorySize: 512
    Handler: app.lambda_handler
    Runtime: python3.12
    Architectures:
      - x86_64
    Environment:
      Variables:
        LOG_LEVEL: "INFO"
        # if doing RAG with Pinecone, add the API key as environment variable
        PINECONE_API_KEY: "{{resolve:secretsmanager:pc:SecretString:PINECONE_API_KEY}}"

# IAM Role for Lambda Function
AgentRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: agent-role
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action: sts:AssumeRole
          Principal:
            Service: lambda.amazonaws.com
    Policies:
      - PolicyName: AmazonBedrockAccessPolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - bedrock:InvokeModel
                - bedrock:InvokeModelWithResponseStream
              Resource:
                - arn:aws:bedrock:*::foundation-model/*
                - !Sub "arn:aws:bedrock:${AWS::Region}:${AWS::AccountId}:inference-profile/*"

# Basic permissions for Lambda function
LoggingPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: logging-policy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          Resource: !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/*:*"
    Roles:
      - !Ref AgentRole

Next, you need a Lex bot configured with the required intents. In this example we will add an AskAQuestion intent for when the customer asks the bot a question, a SayGoodbye intent for ending the conversation, and a FallbackIntent for everything else.

We want the AskAQuestion and FallbackIntent intents to be fulfilled by the Lambda function we defined before, so let’s make sure to enable the FulfillmentCodeHook for these.

For the SayGoodbye intent, we simply add an IntentClosingSetting with a closing response to end the conversation.

The bot will also require permissions to interact with other AWS services, so let’s create an IAM role for it.

The configuration for the bot in SAM’s template.yaml file is as follows:

# IAM Role for the Amazon Lex bot
AgentBotRuntimeRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: agent-bot-runtime-role
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action: sts:AssumeRole
          Principal:
            Service: lexv2.amazonaws.com
    Policies:
      - PolicyName: LexRuntimePolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - polly:SynthesizeSpeech
                - comprehend:DetectSentiment
              Resource: "*"

# Amazon Lex Bot definition
AgentBot:
  Type: AWS::Lex::Bot
  Properties:
    Name: agent-bot
    RoleArn: !GetAtt AgentBotRuntimeRole.Arn
    DataPrivacy:
      ChildDirected: false
    IdleSessionTTLInSeconds: 300
    Description:  "Bot that takes inputs from customer"
    AutoBuildBotLocales: true
    BotLocales:
      - LocaleId: "en_US"
        Description: "Interact with users in english"
        NluConfidenceThreshold: 0.45
        Intents:
          - Name: "AskAQuestion"
            Description: "Intent to ask the bot a question"
            SampleUtterances:
              - Utterance: "Can you help me with this?"
              - Utterance: "I have a question regarding the service"
              - Utterance: "I need help with something"
            FulfillmentCodeHook:
              Enabled: true
          - Name: "SayGoodbye"
            Description:  "Intent to say goodbye and end the conversation"
            SampleUtterances:
               - Utterance: "That's all, thank you"
               - Utterance: "Thank you, bye"
               - Utterance: "Bye bye"
               - Utterance: "Good bye"
            IntentClosingSetting:
              ClosingResponse:
                AllowInterrupt: false
                MessageGroupsList:
                  - Message: 
                      PlainTextMessage: 
                        Value: "My pleasure. Goodbye!"
                    Variations:
                    - PlainTextMessage: 
                        Value: "Happy to assist you. Bye!"
                    - PlainTextMessage: 
                        Value: "Call any time you need something. Have a nice day!"
          - Name: "FallbackIntent"
            Description: "Default intent when no other intent matches"
            ParentIntentSignature:  "AMAZON.FallbackIntent"
            FulfillmentCodeHook:
              Enabled: true

Now we need to associate the Lambda function with the Lex bot. To do so, we publish a bot version and assign an alias to it; then we can associate the alias with the function. We also need a Lambda permission that allows Lex to invoke the function.

# Publish an immutable version of the bot based on DRAFT version
AgentBotVersionOne:
  Type: AWS::Lex::BotVersion
  Properties:
    BotId: !Ref AgentBot
    BotVersionLocaleSpecification:
      - LocaleId: "en_US"
        BotVersionLocaleDetails:
          SourceBotVersion: DRAFT
    Description: "AgentBot Version"

# Associate an alias with the immutable version
AgentBotAlias:
  Type: AWS::Lex::BotAlias
  Properties:
    BotId: !Ref AgentBot
    BotAliasName: !Ref Environment # taken from an "Environment" template parameter (e.g. dev, prod)
    BotVersion: !GetAtt AgentBotVersionOne.BotVersion
    Description: "Alias for AgentBot"
    BotAliasLocaleSettings:
      - LocaleId: "en_US"
        BotAliasLocaleSetting:
          Enabled: true
          CodeHookSpecification:
            LambdaCodeHook:
              LambdaArn: !GetAtt AgentFunction.Arn
              CodeHookInterfaceVersion: '1.0'

# Allow the Lex bot alias to invoke the Lambda function
AgentBotLambdaPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref AgentFunction
    Principal: lexv2.amazonaws.com
    SourceAccount: !Ref AWS::AccountId
    SourceArn: !GetAtt AgentBotAlias.Arn

Finally, we create a Flow in Amazon Connect that we can later associate with a claimed phone number to start receiving calls. To create a flow using SAM we use the AWS::Connect::ContactFlow resource type, and define its blocks using the Amazon Connect Flow language. For our example we’re particularly interested in the ConnectParticipantWithLexBot action.

We also need to associate the Lex bot with the instance.

# Associate the Lex bot with the Amazon Connect instance
IntegrationAssociation:
  Type: AWS::Connect::IntegrationAssociation
  Properties:
    InstanceId: !Sub "arn:aws:connect:${AWS::Region}:${AWS::AccountId}:instance/" # append your Connect instance ID
    IntegrationType: LEX_BOT
    IntegrationArn: !GetAtt AgentBotAlias.Arn

# Definition of the Flow
SampleContactFlow:
  Type: AWS::Connect::ContactFlow
  Properties:
    Name: sample-contact-flow
    Type: CONTACT_FLOW
    Description: A contact flow that answers customer questions
    InstanceArn: !Sub 'arn:aws:connect:${AWS::Region}:${AWS::AccountId}:instance/' # append your Connect instance ID
    Content: !Sub
      - |
        {
          ...
          "Actions": [
            ...
            {
              "Parameters": {
                "Text": "Hey there! Thank you for calling. How can I help you?",
                "LexV2Bot": {
                  "AliasArn": "${BotAliasArn}"
                },
                "LexSessionAttributes": {
                  "x-amz-lex:allow-interrupt:*:*": "True"
                }
              },
              "Identifier": "9bbfdcb2-96fc-41e4-856b-d0661c774265",
              "Type": "ConnectParticipantWithLexBot",
              "Transitions": {
                ...
              }
            },
            ...
          ]
        }
      - { BotAliasArn: !GetAtt AgentBotAlias.Arn }
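A practical way to author the Content JSON is to design the flow once in the Amazon Connect console, export it as JSON, and then parameterize dynamic values such as the bot alias ARN through !Sub, as shown above.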

With these resources in place, it’s only a matter of building the artifacts and applying the resulting Change Set in CloudFormation. The cool thing about SAM is that it abstracts this process into two simple commands:

sam build --use-container
sam deploy --guided
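The first time you run sam deploy --guided, SAM prompts for settings such as the stack name, region, and IAM capabilities, and offers to save them to a samconfig.toml file so that subsequent deploys can run non-interactively.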

Or even better, create a job in GitHub Actions that deploys these changes automatically on each push to a specific branch.

name: Deployment

env:
  AWS_REGION: "us-east-1"

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up SAM
        uses: aws-actions/setup-sam@v2
      - name: Set up Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Build
        run: sam build --use-container
      - name: Deploy
        run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
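Note that the non-interactive sam deploy step relies on the settings saved in samconfig.toml, so make sure to commit that file to the repository. Also, while this example uses long-lived access keys stored as repository secrets, for production pipelines it’s worth considering the OIDC-based authentication that configure-aws-credentials supports, which avoids static credentials altogether.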

Wrapping Up the Deployment Pipeline

By embracing Infrastructure as Code and strategically dividing resources into static and dynamic categories, you can effectively streamline and automate the deployment of voicebots for Amazon Connect.

Using Terraform for foundational resources and AWS SAM for dynamic, frequently changing components, we establish a robust and scalable system. Integrating CI/CD pipelines with GitHub Actions further automates the deployment process, ensuring consistency and efficiency across environments.

This approach not only simplifies management but also accelerates development cycles, allowing you to focus on enhancing the conversational experience for your customers.

Ready to take your voice AI implementation to the next level? At WebRTC.ventures, we specialize in building and deploying scalable, AI-powered real-time communication solutions. Our team of experts can guide you through every step of the process, from initial design to automated deployment and ongoing maintenance. Contact us today to discuss your AI Agent needs and discover how we can transform your business. Let’s make it live!