
AWS Lambda Terraform module

Terraform module which creates almost all supported AWS Lambda resources, and takes care of building and packaging the required Lambda dependencies for functions and layers.


This Terraform module is part of the serverless.tf framework, which aims to simplify all operations when working with serverless in Terraform:

  1. Build and install dependencies - read more. Requires Python 3.6 or newer.
  2. Create, store, and use deployment packages - read more.
  3. Create, update, and publish AWS Lambda Function and Lambda Layer - see usage.
  4. Create static and dynamic aliases for AWS Lambda Function - see usage, see modules/alias.
  5. Do complex deployments (e.g., rolling, canary, rollbacks, triggers) - read more, see modules/deploy.
  6. Use AWS SAM CLI to test Lambda Function - read more.

Features

Usage

Lambda Function (store package locally)

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda1"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"

  source_path = "../src/lambda-function1"

  tags = {
    Name = "my-lambda1"
  }
}

Lambda Function and Lambda Layer (store packages on S3)

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "lambda-with-layer"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"
  publish       = true

  source_path = "../src/lambda-function1"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"

  layers = [
    module.lambda_layer_s3.lambda_layer_arn,
  ]

  environment_variables = {
    Serverless = "Terraform"
  }

  tags = {
    Module = "lambda-with-layer"
  }
}

module "lambda_layer_s3" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "lambda-layer-s3"
  description         = "My amazing lambda layer (deployed from S3)"
  compatible_runtimes = ["python3.12"]

  source_path = "../src/lambda-layer"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"
}

Lambda Functions with existing package (prebuilt) stored locally

module "lambda_function_existing_package_local" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"

  create_package         = false
  local_existing_package = "../existing_package.zip"
}

Lambda Function or Lambda Layer with the deployable artifact maintained separately from the infrastructure

If you want to manage function code and infrastructure resources (such as IAM permissions, policies, events, etc.) in separate flows (e.g., different repositories, teams, CI/CD pipelines), disable source code tracking to turn off deployments (and rollbacks) via the module by setting ignore_source_code_hash = true and deploying a dummy function.

Once the infrastructure and the dummy function are deployed, you can use an external tool (e.g., the AWS CLI) to update the source code of the function, while continuing to use this module via Terraform to manage the infrastructure.

Be aware that changes in the local_existing_package value may trigger deployment via Terraform.

module "lambda_function_externally_managed_package" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-externally-managed-package"
  description   = "My lambda function code is deployed separately"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"

  create_package         = false
  local_existing_package = "./lambda_functions/code.zip"

  ignore_source_code_hash = true
}

Lambda Function with existing package (prebuilt) stored in S3 bucket

Note that this module does not copy prebuilt packages into an S3 bucket. This module can only store packages that it builds itself, either locally or in an S3 bucket.

locals {
  my_function_source = "../path/to/package.zip"
}

resource "aws_s3_bucket" "builds" {
  bucket = "my-builds"
  acl    = "private"
}

resource "aws_s3_object" "my_function" {
  bucket = aws_s3_bucket.builds.id
  key    = "${filemd5(local.my_function_source)}.zip"
  source = local.my_function_source
}

module "lambda_function_existing_package_s3" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"

  create_package      = false
  s3_existing_package = {
    bucket = aws_s3_bucket.builds.id
    key    = aws_s3_object.my_function.id
  }
}

Lambda Functions from Container Image stored on AWS ECR

module "lambda_function_container_image" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"

  create_package = false

  image_uri    = "132367819851.dkr.ecr.eu-west-1.amazonaws.com/complete-cow:1.0"
  package_type = "Image"
}

Lambda Layers (store packages locally and on S3)

module "lambda_layer_local" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "my-layer-local"
  description         = "My amazing lambda layer (deployed from local)"
  compatible_runtimes = ["python3.12"]

  source_path = "../fixtures/python-app1"
}

module "lambda_layer_s3" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "my-layer-s3"
  description         = "My amazing lambda layer (deployed from S3)"
  compatible_runtimes = ["python3.12"]

  source_path = "../fixtures/python-app1"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"
}

Lambda@Edge

Make sure you deploy Lambda@Edge functions into the US East (N. Virginia) region (us-east-1). See Requirements and Restrictions on Lambda Functions.

module "lambda_at_edge" {
  source = "terraform-aws-modules/lambda/aws"

  lambda_at_edge = true

  function_name = "my-lambda-at-edge"
  description   = "My awesome lambda@edge function"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"

  source_path = "../fixtures/python-app1"

  tags = {
    Module = "lambda-at-edge"
  }
}

Lambda Function in VPC

module "lambda_function_in_vpc" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-in-vpc"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"

  source_path = "../fixtures/python-app1"

  vpc_subnet_ids         = module.vpc.intra_subnets
  vpc_security_group_ids = [module.vpc.default_security_group_id]
  attach_network_policy  = true
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.10.0.0/16"

  # Specify at least one of: intra_subnets, private_subnets, or public_subnets
  azs           = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  intra_subnets = ["10.10.101.0/24", "10.10.102.0/24", "10.10.103.0/24"]
}

Additional IAM policies for Lambda Functions

There are 6 supported ways to attach IAM policies to the IAM role used by the Lambda Function:

  1. policy_json - JSON string or heredoc, when attach_policy_json = true.
  2. policy_jsons - List of JSON strings or heredoc, when attach_policy_jsons = true and number_of_policy_jsons > 0.
  3. policy - ARN of existing IAM policy, when attach_policy = true.
  4. policies - List of ARNs of existing IAM policies, when attach_policies = true and number_of_policies > 0.
  5. policy_statements - Map of maps to define IAM statements which will be generated as IAM policy. Requires attach_policy_statements = true. See examples/complete for more information.
  6. assume_role_policy_statements - Map of maps to define IAM statements which will be generated as IAM policy for assuming Lambda Function role (trust relationship). See examples/complete for more information.
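For illustration, a sketch combining a few of these options in one hypothetical module call (the policy ARN, account ID, and statement contents below are placeholders):

```hcl
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  # 1. Inline JSON policy
  attach_policy_json = true
  policy_json = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["xray:GetSamplingStatisticSummaries"]
      Resource = ["*"]
    }]
  })

  # 4. List of existing policy ARNs
  attach_policies    = true
  policies           = ["arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"]
  number_of_policies = 1

  # 5. Policy generated from statements
  attach_policy_statements = true
  policy_statements = {
    dynamodb = {
      effect    = "Allow"
      actions   = ["dynamodb:BatchWriteItem"]
      resources = ["arn:aws:dynamodb:eu-west-1:123456789012:table/Test"]
    }
  }
}
```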

Lambda Permissions for allowed triggers

Lambda Permissions should be specified to allow certain resources to invoke a Lambda Function.

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  allowed_triggers = {
    Config = {
      principal        = "config.amazonaws.com"
      principal_org_id = "o-abcdefghij"
    }
    APIGatewayAny = {
      service    = "apigateway"
      source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/*/*/*"
    },
    APIGatewayDevPost = {
      service    = "apigateway"
      source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/dev/POST/*"
    },
    OneRule = {
      principal  = "events.amazonaws.com"
      source_arn = "arn:aws:events:eu-west-1:135367859851:rule/RunDaily"
    }
  }
}

Conditional creation

Sometimes you need to create resources conditionally, but older versions of Terraform did not allow the use of count inside a module block, so the solution is to specify create arguments.

module "lambda" {
  source = "terraform-aws-modules/lambda/aws"

  create = false # to disable all resources

  create_package  = false  # to control build package process
  create_function = false  # to control creation of the Lambda Function and related resources
  create_layer    = false  # to control creation of the Lambda Layer and related resources
  create_role     = false  # to control creation of the IAM role and policies required for Lambda Function

  attach_cloudwatch_logs_policy = false
  attach_dead_letter_policy     = false
  attach_network_policy         = false
  attach_tracing_policy         = false
  attach_async_event_policy     = false

  # ... omitted
}

<a name="build-package"></a> How does building and packaging work?

This is one of the most complicated parts handled by the module, and normally you don't need to know its internals.

package.py is the Python script which does this. Make sure Python 3.6 or newer is installed. The main functions of the script are to generate a filename for the zip archive based on the content of the files, verify whether the zip archive has already been created, and create the zip archive only when necessary (during apply, not plan).

The hash of a zip archive created from the same file content is always identical, which prevents unnecessary force-updates of the Lambda resources unless the content changes. If you need different filenames for the same content, you can specify the extra string argument hash_extra.

When calling this module multiple times in one execution to create packages with the same source_path, the zip archives will be corrupted due to concurrent writes into the same file. There are two solutions: set different values of hash_extra to create different archives, or create the package once outside (using this module) and then pass the local_existing_package argument to create the other Lambda resources.
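For example, two hypothetical modules sharing the same source_path can be kept separate with distinct hash_extra values (module and path names below are illustrative):

```hcl
module "lambda_one" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  source_path = "../src/shared-code"
  hash_extra  = "lambda-one"
}

module "lambda_two" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  source_path = "../src/shared-code"
  hash_extra  = "lambda-two"
}
```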

<a name="debug"></a> Debug

Building and packaging have historically been hard to debug (especially with Terraform), so we made an effort to make it easier for the user to see debug info. There are 4 different debug levels:

  1. DEBUG - to see only what is happening during the planning phase and how the zip file contents are filtered when patterns are applied.
  2. DEBUG2 - to see more logging output.
  3. DEBUG3 - to see all logging values.
  4. DUMP_ENV - to see all logging values and environment variables (be careful sharing your environment variables, as they may contain secrets!).

You can specify the debug level like this:

export TF_LAMBDA_PACKAGE_LOG_LEVEL=DEBUG2
terraform apply

You can enable comments in heredoc strings in patterns, which can be helpful in some situations. To do this, set this environment variable:

export TF_LAMBDA_PACKAGE_PATTERN_COMMENTS=true
terraform apply

<a name="build"></a> Build Dependencies

You can specify source_path in a variety of ways to achieve the desired flexibility when building deployment packages locally or in Docker. You can use absolute or relative paths. If you have placed your Terraform files in subdirectories, note that relative paths are specified from the directory where terraform plan is run, not from the location of your Terraform file.

Note that, when building locally, files are not copied anywhere from the source directories when making packages; fast Python regular expressions are used to find matching files and directories, which makes packaging very fast and easy to understand.

Simple build from single directory

When source_path is set to a string, the content of that path will be used to create the deployment package as-is:

source_path = "src/function1"

Static build from multiple source directories

When source_path is set to a list of directories, the content of each will be taken and one archive will be created.
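For example, a minimal sketch (directory names are illustrative):

```hcl
source_path = [
  "src/function1",
  "src/function2",
]
```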

Combine various options for extreme flexibility

This is the most complete way of creating a deployment package from multiple sources with multiple dependencies. This example shows some of the available options (see examples/build-package for more):

source_path = [
  "src/main-source",
  "src/another-source/index.py",
  {
    path     = "src/function1-dep",
    patterns = [
      "!.*/.*\\.txt", # Skip all txt files recursively
    ]
  }, {
    path             = "src/python-app1",
    pip_requirements = true,
    pip_tmp_dir      = "/tmp/dir/location"
    prefix_in_zip    = "foo/bar1",
  }, {
    path             = "src/python-app2",
    pip_requirements = "requirements-large.txt",
    patterns = [
      "!vendor/colorful-0.5.4.dist-info/RECORD",
      "!vendor/colorful-.+.dist-info/.*",
      "!vendor/colorful/__pycache__/?.*",
    ]
  }, {
    path             = "src/nodejs14.x-app1",
    npm_requirements = true,
    npm_tmp_dir      = "/tmp/dir/location"
    prefix_in_zip    = "foo/bar1",
  }, {
    path     = "src/python-app3",
    commands = [
      "npm install",
      ":zip"
    ],
    patterns = [
      "!.*/.*\\.txt",    # Skip all txt files recursively
      "node_modules/.+", # Include all node_modules
    ],
  }, {
    path     = "src/python-app3",
    commands = ["go build"],
    patterns = <<END
      bin/.*
      abc/def/.*
    END
  }
]

A few notes:

    !.*/.*\.txt        # Filter all txt files recursively
    node_modules/.*    # Include empty dir or with a content if it exists
    node_modules/.+    # Include full non empty node_modules dir with its content
    node_modules/      # Include node_modules itself without its content
                       # It's also a way to include an empty dir if it exists
    node_modules       # Include a file or an existing dir only

    !abc/.*            # Filter out everything in an abc folder
    abc/def/.*         # Re-include everything in abc/def sub folder
    !abc/def/hgk/.*    # Filter out again in abc/def/hgk sub folder

Building in Docker

If your Lambda Function or Layer uses dependencies, you can build them in Docker and have them included in the deployment package. Here is how you can do it:

build_in_docker   = true
docker_file       = "src/python-app1/docker/Dockerfile"
docker_build_root = "src/python-app1/docker"
docker_image      = "public.ecr.aws/sam/build-python"
runtime           = "python3.12"    # Setting runtime is required when building the package in Docker and for the Lambda Layer resource.

Using this module you can install dependencies from private hosts. To do this, you need to forward the SSH agent:

docker_with_ssh_agent = true

Note that, by default, the docker_image used comes from the registry public.ecr.aws/sam/ and will be based on the runtime that you specify. In other words, if you specify a runtime of python3.12 and do not specify docker_image, then docker_image will resolve to public.ecr.aws/sam/build-python3.12. This ensures that by default the runtime is available in the Docker container.

If you override docker_image, be sure to keep the image in sync with your runtime. During the plan phase, when using Docker, there is no check that the runtime is available to build the package. That means that if you use an image that does not have the runtime, the plan will still succeed, but the apply will fail.

Passing additional Docker options

To add flexibility when building in Docker, you can pass any number of additional options that you require (see the Docker run reference for available options):

  docker_additional_options = [
        "-e", "MY_ENV_VAR='My environment variable value'",
        "-v", "/local:/docker-vol",
  ]

Overriding Docker Entrypoint

To override the docker entrypoint when building in docker, set docker_entrypoint:

  docker_entrypoint = "/entrypoint/entrypoint.sh"

The entrypoint must map to a path within your container, so you need to either build your own image that contains the entrypoint or map it to a file on the host by mounting a volume (see Passing additional Docker options).

<a name="package"></a> Deployment package - Create or use existing

By default, this module creates a deployment package and uses it to create or update a Lambda Function or Lambda Layer.

Sometimes you may want to separate the build of the deployment package (e.g., to compile and install dependencies) from the deployment of the package into two separate steps.

When creating the archive locally outside of this module, you need to set create_package = false and the argument local_existing_package = "existing_package.zip". Alternatively, you may prefer to keep your deployment packages in an S3 bucket and provide a reference to them like this:

  create_package      = false
  s3_existing_package = {
    bucket = "my-bucket-with-lambda-builds"
    key    = "existing_package.zip"
  }

Using deployment package from remote URL

This can be implemented in two steps: download the file locally using curl, and pass the path to the deployment package as the local_existing_package argument.

locals {
  package_url = "https://raw.githubusercontent.com/terraform-aws-modules/terraform-aws-lambda/master/examples/fixtures/python-zip/existing_package.zip"
  downloaded  = "downloaded_package_${md5(local.package_url)}.zip"
}

resource "null_resource" "download_package" {
  triggers = {
    downloaded = local.downloaded
  }

  provisioner "local-exec" {
    command = "curl -L -o ${local.downloaded} ${local.package_url}"
  }
}

data "null_data_source" "downloaded_package" {
  inputs = {
    id       = null_resource.download_package.id
    filename = local.downloaded
  }
}

module "lambda_function_existing_package_from_remote_url" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"

  create_package         = false
  local_existing_package = data.null_data_source.downloaded_package.outputs["filename"]
}

<a name="sam_cli_integration"></a> How to use AWS SAM CLI to test Lambda Function?

AWS SAM CLI is an open-source tool that helps developers initialize, build, test, and deploy serverless applications. The SAM CLI supports Terraform applications.

SAM CLI provides two ways of testing: local testing and testing on-cloud (Accelerate).

Local Testing

Using SAM CLI, you can invoke the Lambda functions defined in your Terraform application locally using the sam local invoke command, providing the function's Terraform address or function name, and setting the hook-name to terraform to tell SAM CLI that the underlying project is a Terraform application.

You can execute the sam local invoke command from your Terraform application root directory as follows:

sam local invoke --hook-name terraform module.hello_world_function.aws_lambda_function.this[0]

You can also pass an event to your lambda function, or overwrite its environment variables. Check here for more information.

You can also invoke your lambda function in debugging mode, and step-through your lambda function source code locally in your preferred editor. Check here for more information.

Testing on-cloud (Accelerate)

You can use AWS SAM CLI to quickly test your application in your AWS development account. Using SAM Accelerate, you can develop your Lambda functions locally, and once you save your updates, SAM CLI will update your development account with the updated Lambda functions. So you can test in the cloud, and if there is any bug, you can quickly update the code and SAM CLI will take care of pushing it to the cloud. Check here for more information about SAM Accelerate.

You can execute the sam sync command from your Terraform application root directory as follows:

sam sync --hook-name terraform --watch

<a name="deployment"></a> How to deploy and manage Lambda Functions?

Simple deployments

Typically, the Lambda Function resource updates when the source code changes. If publish = true is specified, a new Lambda Function version will also be created.

A published Lambda Function can be invoked either by version number or via $LATEST. This is the simplest way of deployment, which does not require any additional tool or service.
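A minimal sketch of enabling versioned deployments (other arguments omitted):

```hcl
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  # Create a new Lambda Function version whenever the package changes
  publish = true
}
```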

Controlled deployments (rolling, canary, rollbacks)

In order to do controlled deployments (rolling, canary, rollbacks) of Lambda Functions we need to use Lambda Function aliases.

In simple terms, a Lambda alias is like a pointer either to one version of a Lambda Function (when deployment is complete), or to two weighted versions of a Lambda Function (during a rolling or canary deployment).

One Lambda Function can be used in multiple aliases. Using aliases gives fine-grained control over which version is deployed when you have multiple environments.

There is an alias module, which simplifies working with aliases (create, manage configurations, update, etc.). See examples/alias for various use-cases of how aliases can be configured and used.

There is a deploy module, which creates the resources required to do deployments using AWS CodeDeploy. It also creates the deployment and waits for completion. See examples/deploy for a complete end-to-end build/update/deploy process.

<a name="terraform-cloud"></a> Terraform CI/CD

Terraform Cloud, Terraform Enterprise, and many other SaaS platforms for running Terraform do not have Python pre-installed on their workers. You will need to provide an alternative Docker image with Python installed to be able to use this module there.

FAQ

Q1: Why is the deployment package not recreated every time I change something? Or why is the deployment package recreated every time even though the content has not changed?

Answer: There can be several reasons, related to concurrent executions or to the content hash. Sometimes changes have happened inside a dependency which is not used in calculating the content hash, or multiple packages are being created at the same time from the same sources. You can force recreation by setting distinct values of hash_extra.

Q2: How do I force the deployment package to be recreated?

Answer: Delete the existing zip archive from the builds directory, or make a change in your source code. If there is no zip archive for the current content hash, it will be recreated during terraform apply.

Q3: null_resource.archive[0] must be replaced

Answer: This probably means that the zip archive has been deployed, but is currently absent locally, so it has to be recreated locally. When you run into this issue during a CI/CD process (where the workspace is clean) or from multiple workspaces, you can set the environment variable TF_RECREATE_MISSING_LAMBDA_PACKAGE=false or pass recreate_missing_package = false as a parameter to the module and run terraform apply. Alternatively, you can pass trigger_on_package_timestamp = false as a parameter to ignore the file timestamp when deciding whether to create the archive.
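For example, both module parameters mentioned above can be set like this:

```hcl
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  # Do not recreate a package that is missing locally (useful in clean CI/CD workspaces)
  recreate_missing_package = false

  # Ignore the file timestamp when deciding whether to recreate the archive
  trigger_on_package_timestamp = false
}
```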

Q4: What does this error mean - "We currently do not support adding policies for $LATEST."?

Answer: When the Lambda function is created with publish = true, the version is automatically increased and a qualified identifier (version number) becomes available, and it will be used when setting Lambda permissions.

When publish = false (the default), only the unqualified identifier ($LATEST) is available, which leads to the error.

The solution is either to disable the creation of Lambda permissions for the current version by setting create_current_version_allowed_triggers = false, or to enable publishing of the Lambda function (publish = true).
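For example, either of these settings resolves the error (shown together here only for illustration; pick one):

```hcl
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  # Option 1: publish versions so a qualified identifier exists
  publish = true

  # Option 2: skip Lambda permissions for the current version
  # create_current_version_allowed_triggers = false
}
```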

Notes

  1. Creation of Lambda Functions and Lambda Layers is very similar, and both support the same features (building from a source path, using an existing package, storing the package locally or on S3)
  2. Check out this Awesome list of AWS Lambda Layers

Examples

Examples by the users of this module

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

Requirements

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> terraform | >= 1.0 |
| <a name="requirement_aws"></a> aws | >= 5.32 |
| <a name="requirement_external"></a> external | >= 1.0 |
| <a name="requirement_local"></a> local | >= 1.0 |
| <a name="requirement_null"></a> null | >= 2.0 |

Providers

| Name | Version |
|------|---------|
| <a name="provider_aws"></a> aws | >= 5.32 |
| <a name="provider_external"></a> external | >= 1.0 |
| <a name="provider_local"></a> local | >= 1.0 |
| <a name="provider_null"></a> null | >= 2.0 |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| aws_cloudwatch_log_group.lambda | resource |
| aws_iam_policy.additional_inline | resource |
| aws_iam_policy.additional_json | resource |
| aws_iam_policy.additional_jsons | resource |
| aws_iam_policy.async | resource |
| aws_iam_policy.dead_letter | resource |
| aws_iam_policy.logs | resource |
| aws_iam_policy.tracing | resource |
| aws_iam_policy.vpc | resource |
| aws_iam_role.lambda | resource |
| aws_iam_role_policy_attachment.additional_inline | resource |
| aws_iam_role_policy_attachment.additional_json | resource |
| aws_iam_role_policy_attachment.additional_jsons | resource |
| aws_iam_role_policy_attachment.additional_many | resource |
| aws_iam_role_policy_attachment.additional_one | resource |
| aws_iam_role_policy_attachment.async | resource |
| aws_iam_role_policy_attachment.dead_letter | resource |
| aws_iam_role_policy_attachment.logs | resource |
| aws_iam_role_policy_attachment.tracing | resource |
| aws_iam_role_policy_attachment.vpc | resource |
| aws_lambda_event_source_mapping.this | resource |
| aws_lambda_function.this | resource |
| aws_lambda_function_event_invoke_config.this | resource |
| aws_lambda_function_url.this | resource |
| aws_lambda_layer_version.this | resource |
| aws_lambda_permission.current_version_triggers | resource |
| aws_lambda_permission.unqualified_alias_triggers | resource |
| aws_lambda_provisioned_concurrency_config.current_version | resource |
| aws_s3_object.lambda_package | resource |
| local_file.archive_plan | resource |
| null_resource.archive | resource |
| null_resource.sam_metadata_aws_lambda_function | resource |
| null_resource.sam_metadata_aws_lambda_layer_version | resource |
| aws_arn.log_group_arn | data source |
| aws_caller_identity.current | data source |
| aws_cloudwatch_log_group.lambda | data source |
| aws_iam_policy.tracing | data source |
| aws_iam_policy.vpc | data source |
| aws_iam_policy_document.additional_inline | data source |
| aws_iam_policy_document.assume_role | data source |
| aws_iam_policy_document.async | data source |
| aws_iam_policy_document.dead_letter | data source |
| aws_iam_policy_document.logs | data source |
| aws_partition.current | data source |
| aws_region.current | data source |
| external_external.archive_prepare | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_allowed_triggers"></a> allowed_triggers | Map of allowed triggers to create Lambda permissions | map(any) | {} | no |
| <a name="input_architectures"></a> architectures | Instruction set architecture for your Lambda function. Valid values are ["x86_64"] and ["arm64"]. | list(string) | null | no |
| <a name="input_artifacts_dir"></a> artifacts_dir | Directory name where artifacts should be stored | string | "builds" | no |
| <a name="input_assume_role_policy_statements"></a> assume_role_policy_statements | Map of dynamic policy statements for assuming Lambda Function role (trust relationship) | any | {} | no |
| <a name="input_attach_async_event_policy"></a> attach_async_event_policy | Controls whether async event policy should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_attach_cloudwatch_logs_policy"></a> attach_cloudwatch_logs_policy | Controls whether CloudWatch Logs policy should be added to IAM role for Lambda Function | bool | true | no |
| <a name="input_attach_create_log_group_permission"></a> attach_create_log_group_permission | Controls whether to add the create log group permission to the CloudWatch logs policy | bool | true | no |
| <a name="input_attach_dead_letter_policy"></a> attach_dead_letter_policy | Controls whether SNS/SQS dead letter notification policy should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_attach_network_policy"></a> attach_network_policy | Controls whether VPC/network policy should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_attach_policies"></a> attach_policies | Controls whether list of policies should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_attach_policy"></a> attach_policy | Controls whether policy should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_attach_policy_json"></a> attach_policy_json | Controls whether policy_json should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_attach_policy_jsons"></a> attach_policy_jsons | Controls whether policy_jsons should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_attach_policy_statements"></a> attach_policy_statements | Controls whether policy_statements should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_attach_tracing_policy"></a> attach_tracing_policy | Controls whether X-Ray tracing policy should be added to IAM role for Lambda Function | bool | false | no |
| <a name="input_authorization_type"></a> authorization_type | The type of authentication that the Lambda Function URL uses. Set to 'AWS_IAM' to restrict access to authenticated IAM users only. Set to 'NONE' to bypass IAM authentication and create a public endpoint. | string | "NONE" | no |
| <a name="input_build_in_docker"></a> build_in_docker | Whether to build dependencies in Docker | bool | false | no |
| <a name="input_cloudwatch_logs_kms_key_id"></a> cloudwatch_logs_kms_key_id | The ARN of the KMS Key to use when encrypting log data. | string | null | no |
| <a name="input_cloudwatch_logs_log_group_class"></a> cloudwatch_logs_log_group_class | Specifies the log class of the log group. Possible values are: STANDARD or INFREQUENT_ACCESS | string | null | no |
| <a name="input_cloudwatch_logs_retention_in_days"></a> cloudwatch_logs_retention_in_days | Specifies the number of days you want to retain log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | number | null | no |
| <a name="input_cloudwatch_logs_skip_destroy"></a> cloudwatch_logs_skip_destroy | Whether to keep the log group (and any logs it may contain) at destroy time. | bool | false | no |
| <a name="input_cloudwatch_logs_tags"></a> cloudwatch_logs_tags | A map of tags to assign to the resource. | map(string) | {} | no |
| <a name="input_code_signing_config_arn"></a> code_signing_config_arn | Amazon Resource Name (ARN) for a Code Signing Configuration | string | null | no |
| <a name="input_compatible_architectures"></a> compatible_architectures | A list of Architectures Lambda layer is compatible with. Currently x86_64 and arm64 can be specified. | list(string) | null | no |
| <a name="input_compatible_runtimes"></a> compatible_runtimes | A list of Runtimes this layer is compatible with. Up to 5 runtimes can be specified. | list(string) | [] | no |
| <a name="input_cors"></a> cors | CORS settings to be used by the Lambda Function URL | any | {} | no |
| <a name="input_create"></a> create | Controls whether resources should be created | bool | true | no |
| <a name="input_create_async_event_config"></a> create_async_event_config | Controls whether async event configuration for Lambda Function/Alias should be created | bool | false | no |
| <a name="input_create_current_version_allowed_triggers"></a> create_current_version_allowed_triggers | Whether to allow triggers on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) | bool | true | no |
| <a name="input_create_current_version_async_event_config"></a> create_current_version_async_event_config | Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) | bool | true | no |
| <a name="input_create_function"></a> create_function | Controls whether Lambda Function resource should be created | bool | true | no |
| <a name="input_create_lambda_function_url"></a> create_lambda_function_url | Controls whether the Lambda Function URL resource should be created | bool | false | no |
| <a name="input_create_layer"></a> create_layer | Controls whether Lambda Layer resource should be created | bool | false | no |
| <a name="input_create_package"></a> create_package | Controls whether Lambda package should be created | bool | true | no |
| <a name="input_create_role"></a> create_role | Controls whether IAM role for Lambda Function should be created | bool | true | no |
| <a name="input_create_sam_metadata"></a> create_sam_metadata | Controls whether the SAM metadata null resource should be created | bool | false | no |
| <a name="input_create_unqualified_alias_allowed_triggers"></a> create_unqualified_alias_allowed_triggers | Whether to allow triggers on unqualified alias pointing to $LATEST version | bool | true | no |
| <a name="input_create_unqualified_alias_async_event_config"></a> create_unqualified_alias_async_event_config | Whether to allow async event configuration on unqualified alias pointing to $LATEST version | bool | true | no |
| <a name="input_create_unqualified_alias_lambda_function_url"></a> create_unqualified_alias_lambda_function_url | Whether to use unqualified alias pointing to $LATEST version in Lambda Function URL | bool | true | no |
| <a name="input_dead_letter_target_arn"></a> dead_letter_target_arn | The ARN of an SNS topic or SQS queue to notify when an invocation fails. | string | null | no |
| <a name="input_description"></a> description | Description of your Lambda Function (or Layer) | string | "" | no |
| <a name="input_destination_on_failure"></a> destination_on_failure | Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations | string | null | no |
| <a name="input_destination_on_success"></a> destination_on_success | Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations | string | null | no |
| <a name="input_docker_additional_options"></a> docker_additional_options | Additional options to pass to the docker run command (e.g. to set environment variables, volumes, etc.) | list(string) | [] | no |
<a name="input_docker_build_root"></a> docker_build_rootRoot dir where to build in Dockerstring""no
<a name="input_docker_entrypoint"></a> docker_entrypointPath to the Docker entrypoint to usestringnullno
<a name="input_docker_file"></a> docker_filePath to a Dockerfile when building in Dockerstring""no
<a name="input_docker_image"></a> docker_imageDocker image to use for the buildstring""no
<a name="input_docker_pip_cache"></a> docker_pip_cacheWhether to mount a shared pip cache folder into docker environment or notanynullno
<a name="input_docker_with_ssh_agent"></a> docker_with_ssh_agentWhether to pass SSH_AUTH_SOCK into docker environment or notboolfalseno
<a name="input_environment_variables"></a> environment_variablesA map that defines environment variables for the Lambda Function.map(string){}no
<a name="input_ephemeral_storage_size"></a> ephemeral_storage_sizeAmount of ephemeral storage (/tmp) in MB your Lambda Function can use at runtime. Valid value between 512 MB to 10,240 MB (10 GB).number512no
<a name="input_event_source_mapping"></a> event_source_mappingMap of event source mappingany{}no
<a name="input_file_system_arn"></a> file_system_arnThe Amazon Resource Name (ARN) of the Amazon EFS Access Point that provides access to the file system.stringnullno
<a name="input_file_system_local_mount_path"></a> file_system_local_mount_pathThe path where the function can access the file system, starting with /mnt/.stringnullno
<a name="input_function_name"></a> function_nameA unique name for your Lambda Functionstring""no
<a name="input_function_tags"></a> function_tagsA map of tags to assign only to the lambda functionmap(string){}no
<a name="input_handler"></a> handlerLambda Function entrypoint in your codestring""no
<a name="input_hash_extra"></a> hash_extraThe string to add into hashing function. Useful when building same source path for different functions.string""no
<a name="input_ignore_source_code_hash"></a> ignore_source_code_hashWhether to ignore changes to the function's source code hash. Set to true if you manage infrastructure and code deployments separately.boolfalseno
<a name="input_image_config_command"></a> image_config_commandThe CMD for the docker imagelist(string)[]no
<a name="input_image_config_entry_point"></a> image_config_entry_pointThe ENTRYPOINT for the docker imagelist(string)[]no
<a name="input_image_config_working_directory"></a> image_config_working_directoryThe working directory for the docker imagestringnullno
<a name="input_image_uri"></a> image_uriThe ECR image URI containing the function's deployment package.stringnullno
<a name="input_invoke_mode"></a> invoke_modeInvoke mode of the Lambda Function URL. Valid values are BUFFERED (default) and RESPONSE_STREAM.stringnullno
<a name="input_kms_key_arn"></a> kms_key_arnThe ARN of KMS key to use by your Lambda Functionstringnullno
<a name="input_lambda_at_edge"></a> lambda_at_edgeSet this to true if using Lambda@Edge, to enable publishing, limit the timeout, and allow edgelambda.amazonaws.com to invoke the functionboolfalseno
<a name="input_lambda_at_edge_logs_all_regions"></a> lambda_at_edge_logs_all_regionsWhether to specify a wildcard in IAM policy used by Lambda@Edge to allow logging in all regionsbooltrueno
<a name="input_lambda_role"></a> lambda_roleIAM role ARN attached to the Lambda Function. This governs both who / what can invoke your Lambda Function, as well as what resources our Lambda Function has access to. See Lambda Permission Model for more details.string""no
<a name="input_layer_name"></a> layer_nameName of Lambda Layer to createstring""no
<a name="input_layer_skip_destroy"></a> layer_skip_destroyWhether to retain the old version of a previously deployed Lambda Layer.boolfalseno
<a name="input_layers"></a> layersList of Lambda Layer Version ARNs (maximum of 5) to attach to your Lambda Function.list(string)nullno
<a name="input_license_info"></a> license_infoLicense info for your Lambda Layer. Eg, MIT or full url of a license.string""no
<a name="input_local_existing_package"></a> local_existing_packageThe absolute path to an existing zip-file to usestringnullno
<a name="input_logging_application_log_level"></a> logging_application_log_levelThe application log level of the Lambda Function. Valid values are "TRACE", "DEBUG", "INFO", "WARN", "ERROR", or "FATAL".string"INFO"no
<a name="input_logging_log_format"></a> logging_log_formatThe log format of the Lambda Function. Valid values are "JSON" or "Text".string"Text"no
<a name="input_logging_log_group"></a> logging_log_groupThe CloudWatch log group to send logs to.stringnullno
<a name="input_logging_system_log_level"></a> logging_system_log_levelThe system log level of the Lambda Function. Valid values are "DEBUG", "INFO", or "WARN".string"INFO"no
<a name="input_maximum_event_age_in_seconds"></a> maximum_event_age_in_secondsMaximum age of a request that Lambda sends to a function for processing in seconds. Valid values between 60 and 21600.numbernullno
<a name="input_maximum_retry_attempts"></a> maximum_retry_attemptsMaximum number of times to retry when the function returns an error. Valid values between 0 and 2. Defaults to 2.numbernullno
<a name="input_memory_size"></a> memory_sizeAmount of memory in MB your Lambda Function can use at runtime. Valid value between 128 MB to 10,240 MB (10 GB), in 64 MB increments.number128no
<a name="input_number_of_policies"></a> number_of_policiesNumber of policies to attach to IAM role for Lambda Functionnumber0no
<a name="input_number_of_policy_jsons"></a> number_of_policy_jsonsNumber of policies JSON to attach to IAM role for Lambda Functionnumber0no
<a name="input_package_type"></a> package_typeThe Lambda deployment package type. Valid options: Zip or Imagestring"Zip"no
<a name="input_policies"></a> policiesList of policy statements ARN to attach to Lambda Function rolelist(string)[]no
<a name="input_policy"></a> policyAn additional policy document ARN to attach to the Lambda Function rolestringnullno
<a name="input_policy_json"></a> policy_jsonAn additional policy document as JSON to attach to the Lambda Function rolestringnullno
<a name="input_policy_jsons"></a> policy_jsonsList of additional policy documents as JSON to attach to Lambda Function rolelist(string)[]no
<a name="input_policy_name"></a> policy_nameIAM policy name. It override the default value, which is the same as role_namestringnullno
<a name="input_policy_path"></a> policy_pathPath of policies to that should be added to IAM role for Lambda Functionstringnullno
<a name="input_policy_statements"></a> policy_statementsMap of dynamic policy statements to attach to Lambda Function roleany{}no
<a name="input_provisioned_concurrent_executions"></a> provisioned_concurrent_executionsAmount of capacity to allocate. Set to 1 or greater to enable, or set to 0 to disable provisioned concurrency.number-1no
<a name="input_publish"></a> publishWhether to publish creation/change as new Lambda Function Version.boolfalseno
<a name="input_putin_khuylo"></a> putin_khuyloDo you agree that Putin doesn't respect Ukrainian sovereignty and territorial integrity? More info: https://en.wikipedia.org/wiki/Putin_khuylo!booltrueno
<a name="input_recreate_missing_package"></a> recreate_missing_packageWhether to recreate missing Lambda package if it is missing locally or notbooltrueno
<a name="input_replace_security_groups_on_destroy"></a> replace_security_groups_on_destroy(Optional) When true, all security groups defined in vpc_security_group_ids will be replaced with the default security group after the function is destroyed. Set the replacement_security_group_ids variable to use a custom list of security groups for replacement instead.boolnullno
<a name="input_replacement_security_group_ids"></a> replacement_security_group_ids(Optional) List of security group IDs to assign to orphaned Lambda function network interfaces upon destruction. replace_security_groups_on_destroy must be set to true to use this attribute.list(string)nullno
<a name="input_reserved_concurrent_executions"></a> reserved_concurrent_executionsThe amount of reserved concurrent executions for this Lambda Function. A value of 0 disables Lambda Function from being triggered and -1 removes any concurrency limitations. Defaults to Unreserved Concurrency Limits -1.number-1no
<a name="input_role_description"></a> role_descriptionDescription of IAM role to use for Lambda Functionstringnullno
<a name="input_role_force_detach_policies"></a> role_force_detach_policiesSpecifies to force detaching any policies the IAM role has before destroying it.booltrueno
<a name="input_role_maximum_session_duration"></a> role_maximum_session_durationMaximum session duration, in seconds, for the IAM rolenumber3600no
<a name="input_role_name"></a> role_nameName of IAM role to use for Lambda Functionstringnullno
<a name="input_role_path"></a> role_pathPath of IAM role to use for Lambda Functionstringnullno
<a name="input_role_permissions_boundary"></a> role_permissions_boundaryThe ARN of the policy that is used to set the permissions boundary for the IAM role used by Lambda Functionstringnullno
<a name="input_role_tags"></a> role_tagsA map of tags to assign to IAM rolemap(string){}no
<a name="input_runtime"></a> runtimeLambda Function runtimestring""no
<a name="input_s3_acl"></a> s3_aclThe canned ACL to apply. Valid values are private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, and bucket-owner-full-control. Defaults to private.string"private"no
<a name="input_s3_bucket"></a> s3_bucketS3 bucket to store artifactsstringnullno
<a name="input_s3_existing_package"></a> s3_existing_packageThe S3 bucket object with keys bucket, key, version pointing to an existing zip-file to usemap(string)nullno
<a name="input_s3_kms_key_id"></a> s3_kms_key_idSpecifies a custom KMS key to use for S3 object encryption.stringnullno
<a name="input_s3_object_override_default_tags"></a> s3_object_override_default_tagsWhether to override the default_tags from provider? NB: S3 objects support a maximum of 10 tags.boolfalseno
<a name="input_s3_object_storage_class"></a> s3_object_storage_classSpecifies the desired Storage Class for the artifact uploaded to S3. Can be either STANDARD, REDUCED_REDUNDANCY, ONEZONE_IA, INTELLIGENT_TIERING, or STANDARD_IA.string"ONEZONE_IA"no
<a name="input_s3_object_tags"></a> s3_object_tagsA map of tags to assign to S3 bucket object.map(string){}no
<a name="input_s3_object_tags_only"></a> s3_object_tags_onlySet to true to not merge tags with s3_object_tags. Useful to avoid breaching S3 Object 10 tag limit.boolfalseno
<a name="input_s3_prefix"></a> s3_prefixDirectory name where artifacts should be stored in the S3 bucket. If unset, the path from artifacts_dir is usedstringnullno
<a name="input_s3_server_side_encryption"></a> s3_server_side_encryptionSpecifies server-side encryption of the object in S3. Valid values are "AES256" and "aws:kms".stringnullno
<a name="input_skip_destroy"></a> skip_destroySet to true if you do not wish the function to be deleted at destroy time, and instead just remove the function from the Terraform state. Useful for Lambda@Edge functions attached to CloudFront distributions.boolnullno
<a name="input_snap_start"></a> snap_start(Optional) Snap start settings for low-latency startupsboolfalseno
<a name="input_source_path"></a> source_pathThe absolute path to a local file or directory containing your Lambda source codeanynullno
<a name="input_store_on_s3"></a> store_on_s3Whether to store produced artifacts on S3 or locally.boolfalseno
<a name="input_tags"></a> tagsA map of tags to assign to resources.map(string){}no
<a name="input_timeout"></a> timeoutThe amount of time your Lambda Function has to run in seconds.number3no
<a name="input_timeouts"></a> timeoutsDefine maximum timeout for creating, updating, and deleting Lambda Function resourcesmap(string){}no
<a name="input_tracing_mode"></a> tracing_modeTracing mode of the Lambda Function. Valid value can be either PassThrough or Active.stringnullno
<a name="input_trigger_on_package_timestamp"></a> trigger_on_package_timestampWhether to recreate the Lambda package if the timestamp changesbooltrueno
<a name="input_trusted_entities"></a> trusted_entitiesList of additional trusted entities for assuming Lambda Function role (trust relationship)any[]no
<a name="input_use_existing_cloudwatch_log_group"></a> use_existing_cloudwatch_log_groupWhether to use an existing CloudWatch log group or create newboolfalseno
<a name="input_vpc_security_group_ids"></a> vpc_security_group_idsList of security group ids when Lambda Function should run in the VPC.list(string)nullno
<a name="input_vpc_subnet_ids"></a> vpc_subnet_idsList of subnet ids when Lambda Function should run in the VPC. Usually private or intra subnets.list(string)nullno
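Several of these inputs are commonly combined. The sketch below (all IDs and the policy ARN are placeholders, not values from this document) shows logging, VPC, and IAM policy inputs used together:

```hcl
module "lambda_function_in_vpc" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-in-vpc"
  handler       = "index.lambda_handler"
  runtime       = "python3.12"
  source_path   = "../src/lambda-function1"

  # Keep CloudWatch logs for two weeks
  cloudwatch_logs_retention_in_days = 14

  # Run the function inside a VPC (placeholder IDs)
  vpc_subnet_ids         = ["subnet-0123456789abcdef0"]
  vpc_security_group_ids = ["sg-0123456789abcdef0"]

  # Attach an extra managed policy to the generated role
  attach_policies    = true
  number_of_policies = 1
  policies           = ["arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess"]
}
```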

Outputs

| Name | Description |
|------|-------------|
| <a name="output_lambda_cloudwatch_log_group_arn"></a> lambda_cloudwatch_log_group_arn | The ARN of the Cloudwatch Log Group |
| <a name="output_lambda_cloudwatch_log_group_name"></a> lambda_cloudwatch_log_group_name | The name of the Cloudwatch Log Group |
| <a name="output_lambda_event_source_mapping_function_arn"></a> lambda_event_source_mapping_function_arn | The ARN of the Lambda function the event source mapping is sending events to |
| <a name="output_lambda_event_source_mapping_state"></a> lambda_event_source_mapping_state | The state of the event source mapping |
| <a name="output_lambda_event_source_mapping_state_transition_reason"></a> lambda_event_source_mapping_state_transition_reason | The reason the event source mapping is in its current state |
| <a name="output_lambda_event_source_mapping_uuid"></a> lambda_event_source_mapping_uuid | The UUID of the created event source mapping |
| <a name="output_lambda_function_arn"></a> lambda_function_arn | The ARN of the Lambda Function |
| <a name="output_lambda_function_arn_static"></a> lambda_function_arn_static | The static ARN of the Lambda Function. Use this to avoid cycle errors between resources (e.g., Step Functions) |
| <a name="output_lambda_function_invoke_arn"></a> lambda_function_invoke_arn | The Invoke ARN of the Lambda Function |
| <a name="output_lambda_function_kms_key_arn"></a> lambda_function_kms_key_arn | The ARN for the KMS encryption key of the Lambda Function |
| <a name="output_lambda_function_last_modified"></a> lambda_function_last_modified | The date the Lambda Function resource was last modified |
| <a name="output_lambda_function_name"></a> lambda_function_name | The name of the Lambda Function |
| <a name="output_lambda_function_qualified_arn"></a> lambda_function_qualified_arn | The ARN identifying your Lambda Function Version |
| <a name="output_lambda_function_qualified_invoke_arn"></a> lambda_function_qualified_invoke_arn | The Invoke ARN identifying your Lambda Function Version |
| <a name="output_lambda_function_signing_job_arn"></a> lambda_function_signing_job_arn | ARN of the signing job |
| <a name="output_lambda_function_signing_profile_version_arn"></a> lambda_function_signing_profile_version_arn | ARN of the signing profile version |
| <a name="output_lambda_function_source_code_hash"></a> lambda_function_source_code_hash | Base64-encoded representation of raw SHA-256 sum of the zip file |
| <a name="output_lambda_function_source_code_size"></a> lambda_function_source_code_size | The size in bytes of the function .zip file |
| <a name="output_lambda_function_url"></a> lambda_function_url | The URL of the Lambda Function URL |
| <a name="output_lambda_function_url_id"></a> lambda_function_url_id | The generated id of the Lambda Function URL |
| <a name="output_lambda_function_version"></a> lambda_function_version | Latest published version of the Lambda Function |
| <a name="output_lambda_layer_arn"></a> lambda_layer_arn | The ARN of the Lambda Layer with version |
| <a name="output_lambda_layer_created_date"></a> lambda_layer_created_date | The date the Lambda Layer resource was created |
| <a name="output_lambda_layer_layer_arn"></a> lambda_layer_layer_arn | The ARN of the Lambda Layer without version |
| <a name="output_lambda_layer_source_code_size"></a> lambda_layer_source_code_size | The size in bytes of the Lambda Layer .zip file |
| <a name="output_lambda_layer_version"></a> lambda_layer_version | The Lambda Layer version |
| <a name="output_lambda_role_arn"></a> lambda_role_arn | The ARN of the IAM role created for the Lambda Function |
| <a name="output_lambda_role_name"></a> lambda_role_name | The name of the IAM role created for the Lambda Function |
| <a name="output_lambda_role_unique_id"></a> lambda_role_unique_id | The unique id of the IAM role created for the Lambda Function |
| <a name="output_local_filename"></a> local_filename | The filename of the zip archive deployed (if deployment was from local) |
| <a name="output_s3_object"></a> s3_object | The map with S3 object data of the zip archive deployed (if deployment was from S3) |
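These outputs can be referenced from other resources. A sketch (assuming the `lambda_function` module from the Usage section; the SNS topic is a placeholder) that wires an SNS topic to the function using the module's outputs:

```hcl
resource "aws_sns_topic" "events" {
  name = "lambda-events"
}

# Allow SNS to invoke the function, using the module's name output
resource "aws_lambda_permission" "sns_invoke" {
  statement_id  = "AllowSNSInvoke"
  action        = "lambda:InvokeFunction"
  function_name = module.lambda_function.lambda_function_name
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.events.arn
}

# Subscribe the function to the topic, using the ARN output
resource "aws_sns_topic_subscription" "lambda" {
  topic_arn = aws_sns_topic.events.arn
  protocol  = "lambda"
  endpoint  = module.lambda_function.lambda_function_arn
}
```

When a resource both consumes and feeds the function (e.g., Step Functions), `lambda_function_arn_static` avoids the dependency cycle that the versioned ARN can create.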
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

Development

Python

During development, when modifying the Python files, use tox to run the unit tests:

```
tox
```

This will try to run the unit tests with each supported Python version, reporting errors for Python versions which are not installed locally.

If you only want to test against your main python version:

```
tox -e py
```

You can also pass additional positional arguments to pytest, which is used to run the tests, e.g. to make the output verbose:

```
tox -e py -- -vvv
```

Authors

Module managed by Anton Babenko. Check out serverless.tf to learn more about doing serverless with Terraform.

Please reach out to Betajob if you are looking for commercial support for your Terraform, AWS, or serverless project.

License

Apache 2 Licensed. See LICENSE for full details.

Additional information for users from Russia and Belarus