When running AWS EC2 machines, having a reliable and repeatable way to build and update Amazon Machine Images (AMIs) is crucial. The AWS Image Builder service allows for AMI creation to be defined and automated.
This blog post walks through setting up an AWS Image Builder pipeline with Terraform that produces a minimal AMI for CI/CD based on the latest Ubuntu image.
Use Case
A simple use case I encountered for custom AMIs is CI/CD. For example, the GitLab Runner Docker Machine executor is a powerful tool for running CI/CD jobs in containers on dynamically provisioned AWS EC2 instances. The executor provides scalable, ephemeral build environments, with EC2 instances automatically created, scaled, and terminated as needed.
Each new AWS EC2 instance requires Docker and Bash to be installed to execute CI/CD jobs. Using a custom pre-built AMI to manage these dependencies offers multiple advantages:
- Faster, more efficient job execution - A custom AMI can include Docker and other necessary dependencies, reducing the time and potential errors of installing software on startup and shortening overall CI/CD job duration.
- Security and compliance - Regularly updated AMIs ensure the latest security patches are applied.
- Consistency - All instances have identical configurations on boot.
Automating the AMI creation with AWS Image Builder and Terraform provides a version-controlled, declarative, AWS-managed solution.
Sample Repository
You can find a sample repository here, which contains the full Terraform configuration and helper scripts for the AWS Image Builder setup outlined in the rest of the article.
Prerequisites
The following dependencies are required:
- Terraform >= 1.10.5 (tfenv can be useful)
- An AWS account
- AWS CLI configured with appropriate IAM permissions for Terraform and Image Builder
- A VPC with subnets and security groups to host the image build process (the sample repository will automatically create these)
Configuration
We’ll define an AWS Image Builder pipeline with Terraform that:
- Uses the latest official Ubuntu AMI as a base.
- Installs Docker and Bash.
- Produces a versioned AMI ready for CI/CD.
- Distributes the AMI for use.
Step 1 - Define the Components
Components define versioned install steps for an AMI. Define a simple shell script that installs and updates the required CI/CD dependencies, then validates and tests the install:
resource "aws_imagebuilder_component" "update_dependencies" {
#checkov:skip=CKV_AWS_180:Ensure Image Builder component is encrypted by KMS using a customer managed Key (CMK)
name = "UpdateDeps"
platform = "Linux"
version = local.recipe_version
data = yamlencode({
name = "UpdateDeps"
schemaVersion = "1.0"
phases = [
{
name = "build"
steps = [{
name = "UpdateDeps"
action = "ExecuteBash"
inputs = {
commands = [
"sudo apt update",
"sudo apt upgrade --yes",
"sudo apt install --yes bash git ca-certificates"
]
}
}]
},
{
name = "validate"
steps = [{
name = "ValidateGitVersion"
action = "ExecuteBash"
inputs = {
commands = [
"git --version"
]
}
}]
}
]
})
}
resource "aws_imagebuilder_component" "install_docker" {
#checkov:skip=CKV_AWS_180:Ensure Image Builder component is encrypted by KMS using a customer managed Key (CMK)
name = "InstallDocker"
platform = "Linux"
version = local.recipe_version
data = yamlencode({
name = "InstallDocker"
schemaVersion = "1.0"
phases = [
{
name = "build"
steps = [
{
name = "InstallDocker"
action = "ExecuteBash"
inputs = {
commands = [
"sudo apt-get update",
"sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release",
"curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg",
"echo \"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null",
"sudo apt-get update",
"sudo apt-get install --yes docker-ce docker-ce-cli containerd.io"
]
}
},
{
name = "PostInstallDocker"
action = "ExecuteBash"
inputs = {
commands = [
"sudo usermod -aG docker ubuntu",
"sudo systemctl enable docker",
"sudo systemctl start docker"
]
}
}
]
},
{
name = "validate"
steps = [{
name = "ValidateDockerVersion"
action = "ExecuteBash"
inputs = {
commands = [
"docker --version"
]
}
}]
},
{
name = "test"
steps = [
{
name = "TestDockerHelloWorld"
action = "ExecuteBash"
inputs = {
commands = [
"sudo docker run hello-world"
]
}
}
]
}
]
})
}
Step 2 - Define the Recipe
An image recipe defines the steps required to build the AMI. Create a recipe that uses the latest Ubuntu AMI and applies our custom components:
locals {
recipe_version = "1.0.0"
}
resource "aws_imagebuilder_image_recipe" "image_builder" {
component {
component_arn = aws_imagebuilder_component.update_dependencies.arn
}
component {
component_arn = aws_imagebuilder_component.install_docker.arn
}
name = "${var.project_name}-${var.environment}"
parent_image = "arn:aws:imagebuilder:${data.aws_region.current.name}:aws:image/${var.base_image}/x.x.x"
version = local.recipe_version
lifecycle {
create_before_destroy = true
}
}
The base image variable is set to the latest Ubuntu LTS, and by using the version number x.x.x the latest available version is selected:
variable "base_image" {
type = string
description = "Base image to build from"
default = "ubuntu-server-24-lts-x86"
}
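The managed image names accepted by base_image can be discovered with the AWS CLI. A sketch, assuming AWS credentials are configured and filtering for Ubuntu images (adjust the grep pattern as needed):

```shell
# List AWS-managed Image Builder base images and narrow to Ubuntu server names.
aws imagebuilder list-images --owner Amazon \
  --query 'imageVersionList[].name' --output text | tr '\t' '\n' | grep -i ubuntu
```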
Step 3 - Define the Infrastructure
Add an infrastructure configuration to define where the image is built and with what configuration:
resource "aws_imagebuilder_infrastructure_configuration" "image_builder" {
name = "${var.project_name}-${var.environment}"
description = "Infrastructure configuration"
instance_profile_name = aws_iam_instance_profile.image_builder.name
instance_types = ["t3.medium"]
security_group_ids = [aws_security_group.image_builder.id]
subnet_id = var.build_subnet_id
terminate_instance_on_failure = true
logging {
s3_logs {
s3_bucket_name = aws_s3_bucket.image_builder.bucket
s3_key_prefix = "build-logs"
}
}
}
See the sample project files for the following required config:
- Instance profile – Grants the necessary IAM permissions for the EC2 instance to run Image Builder tasks.
- Security group – Defines network access rules to allow the EC2 instance to fetch updates and install dependencies.
- AWS S3 – Stores build logs and artifacts for auditing and debugging AMI Image Builder tasks.
Step 4 - Define AMI Distribution
Add a distribution configuration to define which regions the built AMIs are distributed to and who can launch instances from them.
resource "aws_imagebuilder_distribution_configuration" "image_builder" {
#checkov:skip=CKV_AWS_199:Ensure Image Builder Distribution Configuration encrypts AMI's using KMS - a customer managed Key (CMK)
name = "${var.project_name}-${var.environment}"
description = "Multi-region distribution"
distribution {
region = data.aws_region.current.name
ami_distribution_configuration {
ami_tags = {
project = var.project_name
environment = var.environment
ami_ref = "${var.project_name}-${var.environment}"
}
name = "${var.project_name}-${var.environment}-${var.base_image}-{{ imagebuilder:buildDate }}"
launch_permission {
user_ids = [data.aws_caller_identity.current.account_id]
}
}
}
}
The image is tagged with the project name and environment to allow for easier lookup and consumption.
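Once distributed, the AMI can be located by those tags. A sketch using the AWS CLI, where the project and environment values are placeholders for your own:

```shell
# Find the most recently created AMI tagged by this pipeline (requires AWS credentials).
aws ec2 describe-images --owners self \
  --filters "Name=tag:ami_ref,Values=myproject-dev" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' --output text
```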
Step 5 - Define and Trigger the Pipeline
Add an image pipeline to automate the AMI creation based on a schedule or trigger:
resource "aws_imagebuilder_image_pipeline" "image_builder" {
name = "${var.project_name}-${var.environment}"
status = "ENABLED"
image_recipe_arn = aws_imagebuilder_image_recipe.image_builder.arn
infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.image_builder.arn
distribution_configuration_arn = aws_imagebuilder_distribution_configuration.image_builder.arn
}
While the pipeline can be triggered manually or via other AWS events, in this case an EventBridge (CloudWatch Events) rule is used to trigger AMI builds at a regular interval - 9am daily:
resource "aws_cloudwatch_event_rule" "image_builder" {
name = "${var.project_name}-${var.environment}"
description = "Invokes ${aws_imagebuilder_image_pipeline.image_builder.name} image builder pipeline"
schedule_expression = "cron(0 9 * * ? *)"
}
resource "aws_cloudwatch_event_target" "image_builder" {
rule = aws_cloudwatch_event_rule.image_builder.name
arn = aws_imagebuilder_image_pipeline.image_builder.arn
role_arn = aws_iam_role.image_builder_cron.arn
}
data "aws_iam_policy_document" "image_builder_cron" {
statement {
actions = ["imagebuilder:StartImagePipelineExecution"]
resources = [aws_imagebuilder_image_pipeline.image_builder.arn]
}
}
resource "aws_iam_role" "image_builder_cron" {
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "events.amazonaws.com"
}
}]
})
}
resource "aws_iam_policy" "image_builder_cron" {
description = "${var.project_name}-${var.environment} cron policy"
policy = data.aws_iam_policy_document.image_builder_cron.json
}
resource "aws_iam_role_policy_attachment" "image_builder_cron" {
role = aws_iam_role.image_builder_cron.name
policy_arn = aws_iam_policy.image_builder_cron.arn
}
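As mentioned, the pipeline can also be run on demand. A sketch with the AWS CLI - the pipeline ARN shown is a placeholder for the one Terraform creates:

```shell
# Start a one-off pipeline execution (requires AWS credentials);
# replace the ARN with your pipeline's ARN from the console or Terraform state.
aws imagebuilder start-image-pipeline-execution \
  --image-pipeline-arn "arn:aws:imagebuilder:eu-west-1:123456789012:image-pipeline/myproject-dev"
```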
Step 6 - AMI Lifecycle Management
Automatically delete all but the 3 most recent AMIs to manage storage costs:
resource "aws_imagebuilder_lifecycle_policy" "image_builder_lifecycle" {
name = "${var.project_name}-${var.environment}"
description = "Delete old images, keep latest 3"
execution_role = aws_iam_role.image_builder_lifecycle.arn
resource_type = "AMI_IMAGE"
policy_detail {
action {
type = "DELETE"
}
filter {
type = "COUNT"
value = 3
}
}
resource_selection {
tag_map = {
ami_ref = "${var.project_name}-${var.environment}"
}
}
depends_on = [aws_iam_role_policy_attachment.image_builder_lifecycle]
}
resource "aws_iam_role" "image_builder_lifecycle" {
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "imagebuilder.amazonaws.com"
}
}]
})
}
resource "aws_iam_role_policy_attachment" "image_builder_lifecycle" {
policy_arn = "arn:aws:iam::aws:policy/service-role/EC2ImageBuilderLifecycleExecutionPolicy"
role = aws_iam_role.image_builder_lifecycle.name
}
Step 7 - Deploy with Terraform
In the Terraform project directory, initialise Terraform and apply the configuration:
terraform init
terraform apply
Step 8 - Using the AMI
Once the Image Builder pipeline is triggered, an image will be built. Launch an EC2 instance from the AMI and verify that Docker and the other dependencies are installed.
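A minimal sketch with the AWS CLI and SSH, where the AMI ID, key pair, subnet, and security group IDs are all placeholders:

```shell
# Launch a test instance from the freshly built AMI (all IDs are placeholders).
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --instance-type t3.small --key-name my-key \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0

# Then, on the instance, confirm the baked-in dependencies:
#   ssh ubuntu@<instance-ip> 'docker --version && git --version'
```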
Alternatively, for the CI/CD use case, look up the AMI ID via its tags and inject it into the GitLab Runner config.toml under MachineOptions. Depending on how the GitLab Runner is configured, this can be done via a scheduled CI/CD pipeline or another job runner. For example, if the runner is defined in Terraform, the template could be updated:
data "aws_ami" "image_builder_ci_cd" {
most_recent = true
owners = ["self"]
filter {
name = "tag:ami_ref"
values = ["${var.project_name}-${var.environment}"]
}
}
locals {
machine_options = [
"amazonec2-region=${data.aws_region.current.name}",
"amazonec2-instance-type=t3.small",
"amazonec2-ami=${data.aws_ami.image_builder_ci_cd.id}"
]
}
locals {
gitlab_runner_config = <<EOT
global:
# other config...
[[runners]]
# other config...
executor = "docker+machine"
[runners.machine]
# other config...
MachineDriver = "amazonec2"
MachineOptions = [
%{for option in local.machine_options} "${option}", %{endfor}
]
EOT
}
Conclusion
By leveraging AWS Image Builder and Terraform, we have created a scalable, cost-effective, and automated solution for building AMIs on a schedule. For the GitLab Runner CI/CD use case, this pipeline keeps the build environment consistently configured while reducing job duration and improving security with a pre-configured AMI.
Some further improvements may be:
- Automated security scanning – Integrate AWS Inspector or third-party tools to scan AMIs for vulnerabilities before deployment.
- Multi-region AMI replication – Distribute AMIs across AWS regions to support disaster recovery and reduce latency.
- Notification and monitoring – Use AWS SNS / configure integrations for Slack to notify teams when new AMIs are built or if a build fails.
Thanks for reading! I hope this article was helpful.
If you have any questions or suggestions, feel free to reach out via the contact section of this site or on LinkedIn 🚀