AWS LoadTest Distributed Terraform Module

This module provides a simple, uncomplicated way to run your load tests created with JMeter, Locust, K6, or Taurus (bzt) on AWS as IaaS.

Basic usage with JMeter

module "loadtest-distribuited" {

    source  = "marcosborges/loadtest-distribuited/aws"

    name = "nome-da-implantacao"
    executor = "jmeter"
    loadtest_dir_source = "examples/plan/"
    nodes_size = 2
    
    loadtest_entrypoint = "jmeter -n -t jmeter/basic.jmx  -R \"{NODES_IPS}\" -l /var/logs/loadtest -e -o /var/www/html -Dnashorn.args=--no-deprecation-warning -Dserver.rmi.ssl.disable=true "

    subnet_id = data.aws_subnet.current.id
}

data "aws_subnet" "current" {
    filter {
        name   = "tag:Name"
        values = ["my-subnet-name"]
    }
}
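
The entrypoint above writes the JMeter HTML report to /var/www/html on the leader instance. A minimal sketch for surfacing the leader's address after apply, assuming the module label used above (leader_public_ip is one of the module's documented outputs):

output "jmeter_report_host" {
    # Public IP of the leader; the HTML report produced by "-e -o /var/www/html" is written on this instance
    value = module.loadtest-distribuited.leader_public_ip
}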

Basic usage with Taurus

In its basic usage, you must provide the network to be used, the location of your test plan scripts, and the number of nodes needed to generate the desired load.

module "loadtest-distribuited" {

    source  = "marcosborges/loadtest-distribuited/aws"

    name = "nome-da-implantacao"
    executor = "jmeter"
    loadtest_dir_source = "examples/plan/"
    nodes_size = 2
    
    loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/basic.yml"

    subnet_id = data.aws_subnet.current.id
}

data "aws_subnet" "current" {
    filter {
        name   = "tag:Name"
        values = ["my-subnet-name"]
    }
}

Basic usage with Locust

module "loadtest-distribuited" {

    source  = "marcosborges/loadtest-distribuited/aws"

    name = "nome-da-implantacao"
    nodes_size = 2
    executor = "locust"

    loadtest_dir_source = "examples/plan/"
    locust_plan_filename = "basic.py"
    
    loadtest_entrypoint = <<-EOT
        nohup locust \
            -f ${var.locust_plan_filename} \
            --web-port=8080 \
            --expect-workers=${var.nodes_size} \
            --master > locust-leader.out 2>&1 &
    EOT

    node_custom_entrypoint = <<-EOT
        nohup locust \
            -f ${var.locust_plan_filename} \
            --worker \
            --master-host={LEADER_IP} > locust-worker.out 2>&1 &
    EOT

    subnet_id = data.aws_subnet.current.id
}

data "aws_subnet" "current" {
    filter {
        name   = "tag:Name"
        values = ["my-subnet-name"]
    }
}
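
The leader and worker entrypoints above interpolate ${var.locust_plan_filename} and ${var.nodes_size}, which are variables of the calling root module, not of this module; they must be declared alongside the module block. A minimal sketch of matching declarations (names and defaults are illustrative):

variable "locust_plan_filename" {
    # Locust plan file name, reused in both entrypoints and in the module's locust_plan_filename argument
    type    = string
    default = "basic.py"
}

variable "nodes_size" {
    # Number of worker nodes, passed to the leader as --expect-workers
    type    = number
    default = 2
}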

Advanced Config:

The module also provides advanced settings.

  1. It is possible to automatically split the contents of a bulk data file between the load nodes.

  2. It is possible to export the SSH key used for remote access.

  3. We can define a pre-configured and customized image.

  4. We can customize many instance provisioning parameters: tags, monitoring, public IP, security group, etc.

module "loadtest" {


    source  = "marcosborges/loadtest-distribuited/aws"

    name = "nome-da-implantacao"
    executor = "bzt"
    loadtest_dir_source = "examples/plan/"

    loadtest_dir_destination = "/loadtest"
    loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/basic.yml"
    nodes_size = 3

    
    subnet_id = data.aws_subnet.current.id
    
    #AUTO SPLIT
    split_data_mass_between_nodes = {
        enable = true
        data_mass_filenames = [
            "data/users.csv"
        ]
    }

    #EXPORT SSH KEY
    ssh_export_pem = true

    #CUSTOMIZE IMAGE
    leader_ami_id = data.aws_ami.my_image.id
    nodes_ami_id = data.aws_ami.my_image.id

    #CUSTOMIZE TAGS
    leader_tags = {
        "Name"        = "deployment-name-leader"
        "Owner"       = "owner-name"
        "Environment" = "production"
        "Role"        = "leader"
    }
    nodes_tags = {
        "Name"        = "deployment-name"
        "Owner"       = "owner-name"
        "Environment" = "production"
        "Role"        = "node"
    }
    tags = {
        "Name"        = "deployment-name"
        "Owner"       = "owner-name"
        "Environment" = "production"
    }
 
    # SETUP INSTANCE SIZE
    leader_instance_type = "t2.medium"
    nodes_instance_type = "t2.medium"
 
    # SETUP JVM PARAMETERS
    leader_jvm_args = " -Xms12g -Xmx80g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "
    nodes_jvm_args = " -Xms12g -Xmx80g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "

    # DISABLE AUTO SETUP
    auto_setup = false

    # SET JMETER VERSION. WORK ONLY WHEN AUTO-SETUP IS TRUE
    jmeter_version = "5.4.1"

    # ASSOCIATE PUBLIC IP
    leader_associate_public_ip_address = true
    nodes_associate_public_ip_address = true
    
    # ENABLE MONITORING
    leader_monitoring = true
    nodes_monitoring = true

    #  SETUP SSH USERNAME
    ssh_user = "ec2-user"

    # SETUP ALLOWED CIDRS FOR SSH ACCESS
    ssh_cidr_ingress_blocks = ["0.0.0.0/0"]
    
}

data "aws_subnet" "current" {
    filter {
        name   = "tag:Name"
        values = ["my-subnet-name"]
    }
}

data "aws_ami" "my_image" {
    most_recent = true
    filter {
        name   = "owner-alias"
        values = ["amazon"]
    }
    filter {
        name   = "name"
        values = ["amzn2-ami-hvm*"]
    }
}
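
When auto_setup is disabled, as in the example above, the module no longer installs JMeter/Taurus for you; the documented leader_custom_setup_base64 and nodes_custom_setup_base64 inputs accept your own bootstrap scripts encoded in base64. A minimal sketch, assuming hypothetical script paths:

module "loadtest_custom_setup" {
    source = "marcosborges/loadtest-distribuited/aws"

    name                = "deployment-name"
    executor            = "jmeter"
    loadtest_dir_source = "examples/plan/"
    nodes_size          = 2
    subnet_id           = data.aws_subnet.current.id

    # Skip the built-in setup and run custom bootstrap scripts instead
    auto_setup                 = false
    leader_custom_setup_base64 = base64encode(file("scripts/leader-setup.sh")) # hypothetical path
    nodes_custom_setup_base64  = base64encode(file("scripts/nodes-setup.sh"))  # hypothetical path
}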


Suggestion

The C5 family of instances (the network-optimized c5n sizes below) is a good choice for load testing; see the snippet after the table for how to select them.

| Model | vCPU | Mem (GiB) | Storage | Network Band. / EBS Band. (Gbps) |
|-------|------|-----------|---------|----------------------------------|
| c5n.large | 2 | 5.25 | EBS | 25 -> 4.750 |
| c5n.xlarge | 4 | 10.5 | EBS | 25 -> 4.750 |
| c5n.2xlarge | 8 | 21 | EBS | 25 -> 4.750 |
| c5n.4xlarge | 16 | 42 | EBS | 25 -> 4.750 |
| c5n.9xlarge | 36 | 96 | EBS | 50 -> 9.500 |
| c5n.18xlarge | 72 | 192 | EBS | 100 -> 19.000 |
| c5n.metal | 72 | 192 | EBS | 100 -> 19.000 |
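
Switching the cluster to c5n instances only requires the documented instance type inputs; a minimal example:

module "loadtest_c5n" {
    source = "marcosborges/loadtest-distribuited/aws"

    name                = "deployment-name"
    executor            = "jmeter"
    loadtest_dir_source = "examples/plan/"
    subnet_id           = data.aws_subnet.current.id

    # Network-optimized instances for heavier load profiles
    leader_instance_type = "c5n.xlarge"
    nodes_instance_type  = "c5n.2xlarge"
}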

<!-- BEGIN_TF_DOCS -->

Requirements

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> terraform | >= 0.13.1 |
| <a name="requirement_aws"></a> aws | >= 3.63 |

Providers

| Name | Version |
|------|---------|
| <a name="provider_aws"></a> aws | >= 3.63 |
| <a name="provider_null"></a> null | n/a |
| <a name="provider_tls"></a> tls | n/a |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| aws_iam_instance_profile.loadtest | resource |
| aws_iam_role.loadtest | resource |
| aws_instance.leader | resource |
| aws_instance.nodes | resource |
| aws_key_pair.loadtest | resource |
| aws_security_group.loadtest | resource |
| null_resource.executor | resource |
| null_resource.key_pair_exporter | resource |
| null_resource.publish_split_data | resource |
| null_resource.split_data | resource |
| tls_private_key.loadtest | resource |
| aws_ami.amazon_linux_2 | data source |
| aws_subnet.current | data source |
| aws_vpc.current | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_auto_execute"></a> auto_execute | Execute the load test after the leader and nodes are available | `bool` | `true` | no |
| <a name="input_auto_setup"></a> auto_setup | Install and configure Amazon Linux 2 instances with JMeter and Taurus | `bool` | `true` | no |
| <a name="input_executor"></a> executor | Executor of the load test | `string` | `"jmeter"` | no |
| <a name="input_jmeter_version"></a> jmeter_version | JMeter version | `string` | `"5.4.1"` | no |
| <a name="input_leader_ami_id"></a> leader_ami_id | Id of the AMI | `string` | `""` | no |
| <a name="input_leader_associate_public_ip_address"></a> leader_associate_public_ip_address | Associate a public IP address to the leader | `bool` | `true` | no |
| <a name="input_leader_custom_setup_base64"></a> leader_custom_setup_base64 | Custom bash script encoded in base64 to set up the leader | `string` | `""` | no |
| <a name="input_leader_instance_type"></a> leader_instance_type | Instance type of the cluster leader | `string` | `"t2.medium"` | no |
| <a name="input_leader_jvm_args"></a> leader_jvm_args | Leader JVM_ARGS | `string` | `" -Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "` | no |
| <a name="input_leader_monitoring"></a> leader_monitoring | Enable monitoring for the leader | `bool` | `true` | no |
| <a name="input_leader_tags"></a> leader_tags | Tags of the cluster leader | `map` | `{}` | no |
| <a name="input_loadtest_dir_destination"></a> loadtest_dir_destination | Path to the destination loadtest directory | `string` | `"/loadtest"` | no |
| <a name="input_loadtest_dir_source"></a> loadtest_dir_source | Path to the source loadtest directory | `string` | n/a | yes |
| <a name="input_loadtest_entrypoint"></a> loadtest_entrypoint | Entrypoint command for the load test | `string` | `"bzt -q -o execution.0.distributed=\"{NODES_IPS}\" *.yml"` | no |
| <a name="input_name"></a> name | Name of the provision | `string` | n/a | yes |
| <a name="input_nodes_ami_id"></a> nodes_ami_id | Id of the AMI | `string` | `""` | no |
| <a name="input_nodes_associate_public_ip_address"></a> nodes_associate_public_ip_address | Associate a public IP address to the nodes | `bool` | `true` | no |
| <a name="input_nodes_custom_setup_base64"></a> nodes_custom_setup_base64 | Custom bash script encoded in base64 to set up the nodes | `string` | `""` | no |
| <a name="input_nodes_instance_type"></a> nodes_instance_type | Instance type of the cluster nodes | `string` | `"t2.medium"` | no |
| <a name="input_nodes_jvm_args"></a> nodes_jvm_args | Nodes JVM_ARGS | `string` | `"-Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 -Dnashorn.args=--no-deprecation-warning -XX:+HeapDumpOnOutOfMemoryError "` | no |
| <a name="input_nodes_monitoring"></a> nodes_monitoring | Enable monitoring for the nodes | `bool` | `true` | no |
| <a name="input_nodes_size"></a> nodes_size | Total number of nodes in the cluster | `number` | `2` | no |
| <a name="input_nodes_tags"></a> nodes_tags | Tags of the cluster nodes | `map` | `{}` | no |
| <a name="input_region"></a> region | Name of the region | `string` | `"us-east-1"` | no |
| <a name="input_split_data_mass_between_nodes"></a> split_data_mass_between_nodes | Split data mass between nodes | <pre>object({<br>  enable = bool<br>  data_mass_filename = string<br>})</pre> | <pre>{<br>  "data_mass_filename": "../plan/data/data.csv",<br>  "enable": false<br>}</pre> | no |
| <a name="input_ssh_cidr_ingress_blocks"></a> ssh_cidr_ingress_blocks | CIDR blocks allowed SSH access to the leader | `list` | <pre>[<br>  "0.0.0.0/0"<br>]</pre> | no |
| <a name="input_ssh_export_pem"></a> ssh_export_pem | Export the generated SSH key as a .pem file | `bool` | `false` | no |
| <a name="input_ssh_user"></a> ssh_user | SSH user for the leader | `string` | `"ec2-user"` | no |
| <a name="input_subnet_id"></a> subnet_id | Id of the subnet | `string` | n/a | yes |
| <a name="input_tags"></a> tags | Common tags | `map` | `{}` | no |
| <a name="input_taurus_version"></a> taurus_version | Taurus version | `string` | `"1.16.0"` | no |
| <a name="input_web_cidr_ingress_blocks"></a> web_cidr_ingress_blocks | CIDR blocks allowed web access to the leader | `list` | <pre>[<br>  "0.0.0.0/0"<br>]</pre> | no |

Outputs

| Name | Description |
|------|-------------|
| <a name="output_leader_private_ip"></a> leader_private_ip | The private IP address of the leader server instance. |
| <a name="output_leader_public_ip"></a> leader_public_ip | The public IP address of the leader server instance. |
| <a name="output_nodes_private_ip"></a> nodes_private_ip | The private IP addresses of the node instances. |
| <a name="output_nodes_public_ip"></a> nodes_public_ip | The public IP addresses of the node instances. |
<!-- END_TF_DOCS -->