# 🧪 Load Testing Framework
## Installation

Before running the tests locally, make sure you have k6 installed and the required environment variables set. You also need to install the required npm modules and compile the TypeScript tests (`.ts`) into JavaScript files (`.js`) under `dist/` so that k6 can run them.

The following steps accomplish all of this in one go.
### Windows Setup

1. Set your values for the environment variables in `local.ps1`.
2. Source the PowerShell script to set up the environment and dependencies:

   ```powershell
   . .\scripts\setup\local.ps1
   ```
### Unix-based OS Setup

1. Set your values for the environment variables in `local.sh`.
2. Make the script executable and source it:

   ```shell
   chmod +x ./scripts/setup/local.sh && source ./scripts/setup/local.sh
   ```
## Running the Tests

Execute the tests based on your environment:

- **Local machine:**

  ```shell
  npm run test:local path/to/test
  ```

- **k6 Cloud dashboard:**

  ```shell
  npm run test:cloud path/to/test
  ```

- **CI environment:**

  ```shell
  npm run test:ci path/to/test
  ```
## Developing a New Test

### Step 1: Set the Load

Define the load for the tests in `load.js`, using the filename as the key.

For load scenario options, refer to the k6 options documentation.
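As an illustrative sketch of such an entry — the `jsonPlaceholder` key and the stage durations/targets below are hypothetical examples, not the project's actual settings:

```javascript
// load.js — illustrative sketch; the "jsonPlaceholder" key and the stage
// values are hypothetical examples, not this project's real settings.
const load = {
  jsonPlaceholder: {
    // Stages consumed by a "ramping-vus" scenario in a test's options.
    getPosts: [
      { duration: "30s", target: 10 }, // ramp up to 10 VUs
      { duration: "1m", target: 10 },  // hold at 10 VUs
      { duration: "30s", target: 0 },  // ramp back down
    ],
  },
};
// exported (e.g. `export { load }`) for the compiled tests to import
```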
### Step 2: Define Test Endpoints

Specify the endpoints to test in `endpoints.js`.
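A minimal sketch of what `endpoints.js` might contain — the base URL and endpoint names are hypothetical, borrowed from the JSONPlaceholder example used later in this README:

```javascript
// endpoints.js — hypothetical sketch using the public JSONPlaceholder API.
const BASE_URL = "https://jsonplaceholder.typicode.com";

const endpoints = {
  jsonPlaceholder: {
    posts: `${BASE_URL}/posts`,
    // Builder for an item endpoint, e.g. /posts/7
    post: (id) => `${BASE_URL}/posts/${id}`,
  },
};
// exported for the request functions under requests/ to import
```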
### Step 3: Define Request Functions

Create request functions (e.g., `POST`, `GET`) for your endpoints under `requests/`.
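As a sketch of such a module — it runs only under the k6 runtime, which provides the `k6/http` and `k6` imports, and the module name, function names, and JSONPlaceholder URL are illustrative:

```javascript
// requests/jsonPlaceholderRequests.js — illustrative sketch; runs only
// under k6, which provides the k6/http and k6 modules.
import http from "k6/http";
import { check } from "k6";

const BASE_URL = "https://jsonplaceholder.typicode.com"; // hypothetical

export function getPosts() {
  const res = http.get(`${BASE_URL}/posts`);
  check(res, { "status is 200": (r) => r.status === 200 });
  return res;
}

export function createPost(payload) {
  const res = http.post(`${BASE_URL}/posts`, JSON.stringify(payload), {
    headers: { "Content-Type": "application/json" },
  });
  check(res, { "status is 201": (r) => r.status === 201 });
  return res;
}
```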
### Step 4: Mandatory Environment Variables

See `configure.js` for the environment variables required for test execution.
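As a sketch of how such a file typically reads its values — k6 exposes environment variables through the global `__ENV` object, so this only runs under k6, and the variable names here are examples, not necessarily the ones this project requires:

```javascript
// configure.js — illustrative sketch; k6 exposes environment variables
// through the global __ENV object. The variable names are examples,
// not necessarily the ones this project requires.
export const config = {
  baseUrl: __ENV.BASE_URL,
  apiToken: __ENV.API_TOKEN,
};

// Fail fast if a mandatory variable is missing.
for (const [key, value] of Object.entries(config)) {
  if (!value) {
    throw new Error(`Missing mandatory environment variable for "${key}"`);
  }
}
```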
### Step 5: Test Example

Here's a template for a typical load test:
```javascript
// Imports — the relative paths below are placeholders; adjust them to
// your project layout.
import { group, sleep } from "k6";
import { load } from "./load.js";
import * as jsonPlaceholderRequests from "./requests/jsonPlaceholderRequests.js";

// k6 phase options
export const options = {
  ext: {
    loadimpact: {
      // ...
    },
  },
  thresholds: {
    // ...
  },
  scenarios: {
    jsonPlaceholderPosts: {
      executor: "ramping-vus",
      gracefulStop: "30s",
      stages: load.jsonPlaceholder.getPosts,
      gracefulRampDown: "30s",
    },
  },
};

export function setup() {
  // ...
}

// Test case
export default function () {
  group("Get all Posts", () => {
    jsonPlaceholderRequests.getPosts();
    sleep(1);
  });

  group("Create a Post", () => {
    const payload = {
      userId: 1,
      title: "Good post!",
      body: "This is a good post.",
    };
    jsonPlaceholderRequests.createPost(payload);
    sleep(1);
  });
}
```
## Analyzing the Report

The test output will include various metrics. To help you understand each of them:
- `data_received`: The total amount of data received from the target server during the test, shown in kilobytes along with the rate per second.
- `data_sent`: The total amount of data sent to the target server. This includes all HTTP request data sent by k6.
- `group_duration`: The average, minimum, median, maximum, 90th-percentile, and 95th-percentile durations of the named groups in your test script. Groups are a way to organize logic within a test script in k6.
- `http_req_blocked`: The time spent blocked before initiating the request. This can include time spent waiting for a free TCP connection from a connection pool if you're hitting connection limits.
- `http_req_connecting`: The time spent establishing TCP connections to the server. If this is high, it could indicate network issues or server overload.
- `http_req_duration`: The total time for the request, including sending time, waiting time, and receiving time. The detailed breakdown is provided for expected responses (`expected_response`).
- `http_req_failed`: The percentage of failed requests. Ideally, this should be 0%.
- `http_req_receiving`: The time spent receiving the response from the server after the initial request was sent.
- `http_req_sending`: The time spent sending the HTTP request to the server. This is typically a small number.
- `http_req_tls_handshaking`: The time spent performing the TLS handshake. If your request uses HTTPS, this includes the time taken to negotiate the SSL/TLS session.
- `http_req_waiting`: The time spent waiting for a response from the server (also known as Time to First Byte, TTFB). This doesn't include the time taken to download the response body.
- `http_reqs`: The total number of HTTP requests made during the entire test.
- `iteration_duration`: The time it takes to complete one full iteration of the main function in your script.
- `iterations`: The total number of times the main function was executed.
- `vus`: The number of Virtual Users (VUs) actively executing during the current test step.
- `vus_max`: The maximum number of concurrently active VUs during the test.
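To act on these metrics rather than just read them, k6 `thresholds` can fail a run when a metric crosses a limit. A minimal sketch — the limits below are illustrative examples, not project defaults:

```javascript
// Illustrative thresholds keyed by the metrics described above; the
// limits (500 ms, 1% failure rate) are example values, not project defaults.
const options = {
  thresholds: {
    http_req_duration: ["p(95)<500"], // 95th percentile below 500 ms
    http_req_failed: ["rate<0.01"],   // fewer than 1% of requests may fail
  },
};
// In a real test file this would be `export const options = { ... }`.
```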
### Step 6: Cleanup (Optional)

For cleanup, use the scripts under `cleaners/`. These can be triggered manually or run automatically in `teardown()`.
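As a sketch of the automatic variant — runnable only under k6, and the cleaner module, its API, and the data passed through `setup()` are hypothetical:

```javascript
// teardown() runs once after the test finishes; k6 passes it the value
// returned by setup(). The cleaner module and its API are hypothetical.
import { deleteTestData } from "./cleaners/jsonPlaceholderCleaner.js";

export function setup() {
  // Collect anything the cleaners will need later, e.g. created record IDs.
  return { createdPostIds: [] };
}

export function teardown(data) {
  deleteTestData(data.createdPostIds);
}
```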