# AWS LoadTest Distributed Terraform Module
This module provides a simple, straightforward way to run load tests created with JMeter, Locust, k6, or Taurus (bzt) on AWS as IaaS.
## Basic usage with JMeter
module "loadtest-distribuited" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
executor = "jmeter"
loadtest_dir_source = "examples/plan/"
nodes_size = 2
loadtest_entrypoint = "jmeter -n -t jmeter/basic.jmx -R \"{NODES_IPS}\" -l /var/logs/loadtest -e -o /var/www/html -Dnashorn.args=--no-deprecation-warning -Dserver.rmi.ssl.disable=true "
subnet_id = data.aws_subnet.current.id
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["my-subnet-name"]
}
}
## Basic usage with Taurus
In basic usage, you only need to specify which subnet will be used, where your test plan scripts are located, and the number of nodes required to generate the desired load.
module "loadtest-distribuited" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
executor = "jmeter"
loadtest_dir_source = "examples/plan/"
nodes_size = 2
loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/basic.yml"
subnet_id = data.aws_subnet.current.id
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["my-subnet-name"]
}
}
## Basic usage with Locust
module "loadtest-distribuited" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
nodes_size = 2
executor = "locust"
loadtest_dir_source = "examples/plan/"
locust_plan_filename = "basic.py"
loadtest_entrypoint = <<-EOT
nohup locust \
-f ${var.locust_plan_filename} \
--web-port=8080 \
--expect-workers=${var.node_size} \
--master > locust-leader.out 2>&1 &
EOT
node_custom_entrypoint = <<-EOT
nohup locust \
-f ${var.locust_plan_filename} \
--worker \
--master-host={LEADER_IP} > locust-worker.out 2>&1 &
EOT
subnet_id = data.aws_subnet.current.id
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["my-subnet-name"]
}
}
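Since the leader entrypoint above serves the Locust web UI on port 8080, you can surface its address from your root module. A minimal sketch, assuming the module instance is named `loadtest-distribuited` as above and the leader keeps its default public IP:

```hcl
# URL of the Locust web UI served by the leader instance.
# Assumes leader_associate_public_ip_address = true (the default)
# and that port 8080 is reachable from your network.
output "locust_web_ui" {
    value = "http://${module.loadtest-distribuited.leader_public_ip}:8080"
}
```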
## Advanced Config
The module also provides advanced settings, all demonstrated in the example below:

- The contents of a data mass file can be split automatically between the load nodes.
- The SSH key used for remote access can be exported.
- A pre-configured, customized image can be used.
- Many instance provisioning parameters can be customized: tags, monitoring, public IP, security group, etc.
module "loadtest" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
executor = "bzt"
loadtest_dir_source = "examples/plan/"
loadtest_dir_destination = "/loadtest"
loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/basic.yml"
nodes_size = 3
subnet_id = data.aws_subnet.current.id
#AUTO SPLIT
split_data_mass_between_nodes = {
enable = true
data_mass_filenames = [
"data/users.csv"
]
}
#EXPORT SSH KEY
ssh_export_pem = true
#CUSTOMIZE IMAGE
leader_ami_id = data.aws_ami.my_image.id
nodes_ami_id = data.aws_ami.my_image.id
#CUSTOMIZE TAGS
leader_tags = {
"Name" = "nome-da-implantacao-leader",
"Owner": "nome-do-proprietario",
"Environment": "producao",
"Role": "leader"
}
nodes_tags = {
"Name": "nome-da-implantacao",
"Owner": "nome-do-proprietario",
"Environment": "producao",
"Role": "node"
}
tags = {
"Name": "nome-da-implantacao",
"Owner": "nome-do-proprietario",
"Environment": "producao"
}
# SETUP INSTANCE SIZE
leader_instance_type = "t2.medium"
nodes_instance_type = "t2.medium"
# SETUP JVM PARAMETERS
leader_jvm_args = " -Xms12g -Xmx80g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "
nodes_jvm_args = " -Xms12g -Xmx80g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "
# DISABLE AUTO SETUP
auto_setup = false
# SET JMETER VERSION. WORK ONLY WHEN AUTO-SETUP IS TRUE
jmeter_version = "5.4.1"
# ASSOCIATE PUBLIC IP
leader_associate_public_ip_address = true
nodes_associate_public_ip_address = true
# ENABLE MONITORING
leader_monitoring = true
nodes_monitoring = true
# SETUP SSH USERNAME
ssh_user = "ec2-user"
# SETUP ALLOWEDs CIDRS FOR SSH ACCESS
ssh_cidr_ingress_block = ["0.0.0.0/0"]
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["my-subnet-name"]
}
}
data "aws_ami" "my_image" {
most_recent = true
filter {
name = "owner-alias"
values = ["amazon"]
}
filter {
name = "name"
values = ["amzn2-ami-hvm*"]
}
}
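When `auto_setup` is disabled, the `leader_custom_setup_base64` and `nodes_custom_setup_base64` inputs accept a bash setup script encoded in base64. A minimal sketch, assuming a hypothetical local `setup.sh` that installs your load testing toolchain:

```hcl
module "loadtest" {
    source = "marcosborges/loadtest-distribuited/aws"

    name                = "deployment-name"
    loadtest_dir_source = "examples/plan/"
    subnet_id           = data.aws_subnet.current.id

    # filebase64() reads the hypothetical setup.sh and encodes it as base64.
    auto_setup                 = false
    leader_custom_setup_base64 = filebase64("${path.module}/setup.sh")
    nodes_custom_setup_base64  = filebase64("${path.module}/setup.sh")
}
```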
## Suggestion

The C5n instance family is a good choice for load testing, thanks to its compute power and high network bandwidth.
Model | vCPU | Mem (GiB) | Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Mbps) |
---|---|---|---|---|---|
c5n.large | 2 | 5.25 | EBS-only | Up to 25 | Up to 4,750 |
c5n.xlarge | 4 | 10.5 | EBS-only | Up to 25 | Up to 4,750 |
c5n.2xlarge | 8 | 21 | EBS-only | Up to 25 | Up to 4,750 |
c5n.4xlarge | 16 | 42 | EBS-only | Up to 25 | 4,750 |
c5n.9xlarge | 36 | 96 | EBS-only | 50 | 9,500 |
c5n.18xlarge | 72 | 192 | EBS-only | 100 | 19,000 |
c5n.metal | 72 | 192 | EBS-only | 100 | 19,000 |
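For example, a sketch of sizing the cluster with c5n instances (pick the models that match your target load; the subnet data source follows the earlier examples):

```hcl
module "loadtest" {
    source = "marcosborges/loadtest-distribuited/aws"

    name                = "deployment-name"
    loadtest_dir_source = "examples/plan/"
    subnet_id           = data.aws_subnet.current.id

    # A larger leader to aggregate results; mid-size workers to generate load.
    leader_instance_type = "c5n.2xlarge"
    nodes_instance_type  = "c5n.xlarge"
    nodes_size           = 4
}
```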
<!-- BEGIN_TF_DOCS -->
## Requirements
Name | Version |
---|---|
<a name="requirement_terraform"></a> terraform | >= 0.13.1 |
<a name="requirement_aws"></a> aws | >= 3.63 |
## Providers
Name | Version |
---|---|
<a name="provider_aws"></a> aws | >= 3.63 |
<a name="provider_null"></a> null | n/a |
<a name="provider_tls"></a> tls | n/a |
## Modules
No modules.
## Resources
Name | Type |
---|---|
aws_iam_instance_profile.loadtest | resource |
aws_iam_role.loadtest | resource |
aws_instance.leader | resource |
aws_instance.nodes | resource |
aws_key_pair.loadtest | resource |
aws_security_group.loadtest | resource |
null_resource.executor | resource |
null_resource.key_pair_exporter | resource |
null_resource.publish_split_data | resource |
null_resource.split_data | resource |
tls_private_key.loadtest | resource |
aws_ami.amazon_linux_2 | data source |
aws_subnet.current | data source |
aws_vpc.current | data source |
## Inputs
Name | Description | Type | Default | Required |
---|---|---|---|---|
<a name="input_auto_execute"></a> auto_execute | Execute Loadtest after leader and nodes available | bool | true | no |
<a name="input_auto_setup"></a> auto_setup | Install and configure instances Amazon Linux2 with JMeter and Taurus | bool | true | no |
<a name="input_executor"></a> executor | Executor of the loadtest | string | "jmeter" | no |
<a name="input_jmeter_version"></a> jmeter_version | JMeter version | string | "5.4.1" | no |
<a name="input_leader_ami_id"></a> leader_ami_id | Id of the AMI | string | "" | no |
<a name="input_leader_associate_public_ip_address"></a> leader_associate_public_ip_address | Associate public IP address to the leader | bool | true | no |
<a name="input_leader_custom_setup_base64"></a> leader_custom_setup_base64 | Custom bash script encoded in base64 to setup the leader | string | "" | no |
<a name="input_leader_instance_type"></a> leader_instance_type | Instance type of the cluster leader | string | "t2.medium" | no |
<a name="input_leader_jvm_args"></a> leader_jvm_args | JVM Leader JVM_ARGS | string | " -Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 " | no |
<a name="input_leader_monitoring"></a> leader_monitoring | Enable monitoring for the leader | bool | true | no |
<a name="input_leader_tags"></a> leader_tags | Tags of the cluster leader | map | {} | no |
<a name="input_loadtest_dir_destination"></a> loadtest_dir_destination | Path to the destination loadtest directory | string | "/loadtest" | no |
<a name="input_loadtest_dir_source"></a> loadtest_dir_source | Path to the source loadtest directory | string | n/a | yes |
<a name="input_loadtest_entrypoint"></a> loadtest_entrypoint | Path to the entrypoint command | string | "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" *.yml" | no |
<a name="input_name"></a> name | Name of the provision | string | n/a | yes |
<a name="input_nodes_ami_id"></a> nodes_ami_id | Id of the AMI | string | "" | no |
<a name="input_nodes_associate_public_ip_address"></a> nodes_associate_public_ip_address | Associate public IP address to the nodes | bool | true | no |
<a name="input_nodes_custom_setup_base64"></a> nodes_custom_setup_base64 | Custom bash script encoded in base64 to setup the nodes | string | "" | no |
<a name="input_nodes_instance_type"></a> nodes_instance_type | Instance type of the cluster nodes | string | "t2.medium" | no |
<a name="input_nodes_jvm_args"></a> nodes_jvm_args | JVM Nodes JVM_ARGS | string | "-Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 -Dnashorn.args=--no-deprecation-warning -XX:+HeapDumpOnOutOfMemoryError " | no |
<a name="input_nodes_monitoring"></a> nodes_monitoring | Enable monitoring for the leader | bool | true | no |
<a name="input_nodes_size"></a> nodes_size | Total number of nodes in the cluster | number | 2 | no |
<a name="input_nodes_tags"></a> nodes_tags | Tags of the cluster nodes | map | {} | no |
<a name="input_region"></a> region | Name of the region | string | "us-east-1" | no |
<a name="input_split_data_mass_between_nodes"></a> split_data_mass_between_nodes | Split data mass between nodes | <pre>object({<br> enable = bool<br> data_mass_filename = string<br> })</pre> | <pre>{<br> "data_mass_filename": "../plan/data/data.csv",<br> "enable": false<br>}</pre> | no |
<a name="input_ssh_cidr_ingress_blocks"></a> ssh_cidr_ingress_blocks | SSH user for the leader | list | <pre>[<br> "0.0.0.0/0"<br>]</pre> | no |
<a name="input_ssh_export_pem"></a> ssh_export_pem | n/a | bool | false | no |
<a name="input_ssh_user"></a> ssh_user | SSH user for the leader | string | "ec2-user" | no |
<a name="input_subnet_id"></a> subnet_id | Id of the subnet | string | n/a | yes |
<a name="input_tags"></a> tags | Common tags | map | {} | no |
<a name="input_taurus_version"></a> taurus_version | Taurus version | string | "1.16.0" | no |
<a name="input_web_cidr_ingress_blocks"></a> web_cidr_ingress_blocks | web for the leader | list | <pre>[<br> "0.0.0.0/0"<br>]</pre> | no |
## Outputs
Name | Description |
---|---|
<a name="output_leader_private_ip"></a> leader_private_ip | The private IP address of the leader server instance. |
<a name="output_leader_public_ip"></a> leader_public_ip | The public IP address of the leader server instance. |
<a name="output_nodes_private_ip"></a> nodes_private_ip | The private IP address of the nodes instances. |
<a name="output_nodes_public_ip"></a> nodes_public_ip | The public IP address of the nodes instances. |