
AWS Lambda Extensions with Rust



What are Lambda Extensions?

AWS Lambda Extensions were introduced a while ago, with the aim of providing a way to integrate with the Lambda execution lifecycle. They offer a solution for being able to enhance our Lambda functions with our existing monitoring, observability, security, and governance tooling. This can be accomplished by developing some code that runs in parallel with our Lambda function and listens to events emitted by the Lambda runtime. These events can be lifecycle events such as Init, Invoke, and Shut Down events, telemetry events, or even log events, generated each time the Lambda function writes a message to its standard output.

There is no complex installation or configuration needed to use Lambda Extensions. They are usually deployed as Lambda Layers, which can easily be attached to a Lambda function.

Some use cases for extensions are the following:

  • capture diagnostic information before, during, and after function invocation;
  • code instrumentation without the need to change our Lambda code;
  • fetching configuration data and secrets before the invocation of the function, being able to cache these values between consecutive executions;
  • detecting and alerting on function activity through hardened security agents, which can run as separate processes from the function.

We can have two types of extensions:

  • Internal extensions: a type of extension that runs within the runtime process as a separate thread. The runtime controls the start and stop of the internal extension. An alternative approach for integrating with the Lambda environment is to use language-specific environment variables and wrapper scripts.
  • External extensions: they run in parallel with the function during its execution. They also continue running after the function execution has completed, offering the ability to gather telemetry and monitoring information about the function. Their way of working is similar to the sidecar pattern found in containerized and highly distributed systems.
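As a quick illustration of the wrapper-script approach mentioned for internal extensions, a managed runtime can be pointed at a script via the `AWS_LAMBDA_EXEC_WRAPPER` environment variable. The sketch below is illustrative only; the injected variable is a made-up example, and the script must hand control back to the runtime with `exec`:

```shell
#!/bin/bash
# Hypothetical wrapper script (e.g. shipped in a layer and referenced via
# AWS_LAMBDA_EXEC_WRAPPER). The runtime invokes it with the original startup
# command as arguments, so we can inject settings before handing over control.
export MY_EXTRA_SETTING=enabled   # illustrative environment variable
exec "$@"                         # start the actual runtime process
```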

In the upcoming lines, we will discuss how to build external extensions for our functions. External extensions provide higher flexibility compared to internal extensions, but they also come with their own caveats.



Why Rust?

Extensions can be developed in any language of choice. Since external extensions run in parallel with the actual function, they can be written in an entirely different programming language than the one chosen for the lambda. While it is nice to have this freedom of choice, the AWS documentation recommends using a compiled language. The reasons for this are the following:

  • extensions written in a compiled language are built as a self-contained binary. This is great, since they can work with any Lambda runtime, as long as we pay attention to the architecture. We can build the same extension and deploy it to both x86-64 and arm64;
  • extensions can affect the performance of the Lambda function. They share resources with the Lambda runtime, so it is recommended that they be as performant as possible to avoid interfering with the Lambda.

Besides these two reasons, a third one would be the presence of great tooling. While developing extensions on our local machine is still a challenge, we have some great open-source tools and libraries to alleviate some of the hurdles. In this blog post, we will use cargo-lambda to bootstrap and deploy an extension project.



Let’s Bootstrap and Deploy an Extension

We will use cargo-lambda for our extension. cargo-lambda is an open-source tool whose purpose is to help developers build Lambda Functions and Lambda Extensions. It can generate and bootstrap Rust projects for both lambdas and extensions, and it can help build and compile these projects. This is important, since lambdas and extensions need to be cross-compiled to a Linux executable binary. cargo-lambda makes this seamless from both Windows and macOS environments.

The first step would be to create a Rust project for our extension. This can be done with the following cargo-lambda command:

cargo lambda new --extension project-name --logs

Notice the --logs flag at the end of the command. The presence of this flag will make cargo-lambda generate a project with the events needed for the Logs API. Another option would be the --telemetry flag, which would bootstrap a project with the Telemetry API calls. We can choose either of them; the main difference would be the initially generated Rust code. Since all the required dependencies for the extension are in the same crate (which is added as a dependency), we can simply transform our project as we wish afterward.

We can build our project using the following command:

cargo lambda build --extension --release

This will build our extension in release mode, targeting the x86-64 architecture (even if we are on an M1 Mac). If we want to build it for arm64, we can add the --arm64 flag at the end.

Now that we have a binary built, we may want to deploy it to AWS. We can do this using another cargo-lambda command:

cargo lambda deploy --extension

This will deploy our extension to AWS in the form of a Lambda Layer. As we discussed previously, our extension should be able to run beside any Lambda runtime. By default, the deploy command will only enable compatibility for the provided.al2 runtime (mainly Rust, or any other compiled Lambda function). To enable it for other runtimes such as NodeJS or Python, we can add the --compatible_runtimes flag, for example:

cargo lambda deploy --extension --compatible_runtimes=provided.al2,nodejs16.x,python3.9

A comprehensive list with all the compatible runtimes can be found in the AWS documentation. As a side note, I have to mention that this feature for supporting other runtimes was implemented by myself for the cargo-lambda project. I hope that other people will find it as useful 🙂

The last step would be to attach our extension to an existing Lambda function. This can be accomplished from the AWS console by simply attaching a Lambda Layer to a function.

By following these steps, we have created and deployed an extension that does essentially nothing useful. Moving on, we will develop an extension that listens to Lambda log messages and sends them to a Kinesis Firehose delivery stream.



Develop a Log Router Extension for Kinesis Firehose

Many organizations employ a log aggregator framework. The reason for this is to have every log message in one place for easier operational and support tasks, debugging, or even legal purposes. By default, Lambda functions use CloudWatch for logging. To integrate with another log aggregator, extensions are the perfect solution. In fact, many existing log aggregator products already provide ready-to-use Lambda Extensions. For example, AWS partners such as Datadog, Dynatrace, Honeycomb, Sumo Logic, etc. have their extensions published publicly, some of them having their code open for everybody. A comprehensive list of partners can be found in the AWS docs.

In case we use an internally developed log aggregator, or the product we employ does not provide an extension out of the box, we can create one ourselves. In the following lines, we will see how to build an extension that integrates with Kinesis Firehose and saves our log messages into an S3 bucket.

In the previous sections, we have already seen how to bootstrap and deploy an extension. To be able to send messages to Kinesis, we can write the following code:

use aws_config::meta::region::RegionProviderChain;
use aws_sdk_firehose::error::PutRecordError;
use aws_sdk_firehose::model::Record;
use aws_sdk_firehose::output::PutRecordOutput;
use aws_sdk_firehose::types::{Blob, SdkError};
use aws_sdk_firehose::Client;
use lambda_extension::{service_fn, Error, Extension, LambdaLog, LambdaLogRecord, SharedService};
use lazy_static::lazy_static;
use std::env;

static ENV_STREAM_NAME: &str = "KINESIS_DELIVERY_STREAM";

// Read the stream name from an environment variable
lazy_static! {
    static ref STREAM: String = env::var(ENV_STREAM_NAME).unwrap_or_else(|e| panic!(
        "Could not read environment variable {}! Reason: {}",
        ENV_STREAM_NAME, e
    ));
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    println!("Loading extension...");
    // Register the handler to our extension
    let logs_processor = SharedService::new(service_fn(handler));

    Extension::new()
        .with_logs_processor(logs_processor)
        .run()
        .await?;

    Ok(())
}

async fn handler(logs: Vec<LambdaLog>) -> Result<(), Error> {
    // Build the Kinesis Firehose client
    let firehose_client = build_firehose_client().await;
    // Listen to all the events emitted when a Lambda function is logging something. Send these
    // events to a Firehose delivery stream
    for log in logs {
        match log.record {
            LambdaLogRecord::Function(record) | LambdaLogRecord::Extension(record) => {
                put_record(&firehose_client, STREAM.as_str(), &record).await?;
            }
            _ => (),
        }
    }
    Ok(())
}

// Build the Firehose client
async fn build_firehose_client() -> Client {
    let region_provider = RegionProviderChain::default_provider();
    let shared_config = aws_config::from_env().region(region_provider).load().await;
    Client::new(&shared_config)
}

// Send a message to the Firehose stream
async fn put_record(
    client: &Client,
    stream: &str,
    data: &str,
) -> Result<PutRecordOutput, SdkError<PutRecordError>> {
    let blob = Blob::new(data);

    client
        .put_record()
        .record(Record::builder().data(blob).build())
        .delivery_stream_name(stream)
        .send()
        .await
}

The code itself is pretty self-explanatory. Besides having some initial boilerplate code to register our extension handler, what we are doing is listening to log events and sending those events to a Firehose delivery stream. The stream will batch the incoming events and save them in an S3 bucket.
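One thing worth noting: the handler above issues one PutRecord call per log line. For chattier functions it may be worth grouping lines and using Firehose's PutRecordBatch operation instead, which accepts up to 500 records per call. The grouping logic itself can be sketched with the standard library alone (the constant and helper name below are illustrative, not part of the extension code above):

```rust
// Illustrative limit: Firehose's PutRecordBatch accepts at most 500 records per call.
const MAX_BATCH_SIZE: usize = 500;

// Split a slice of log lines into batches no larger than `batch_size`.
// Each returned Vec could then be mapped to Records and sent with a single
// put_record_batch() call instead of many put_record() calls.
fn into_batches(lines: &[String], batch_size: usize) -> Vec<Vec<String>> {
    lines
        .chunks(batch_size)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let lines: Vec<String> = (0..1200).map(|i| format!("log line {i}")).collect();
    let batches = into_batches(&lines, MAX_BATCH_SIZE);
    // 1200 lines are split into batches of 500, 500, and 200
    assert_eq!(batches.len(), 3);
    assert_eq!(batches[2].len(), 200);
    println!("{} batches", batches.len());
}
```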

In terms of IAM permissions, we need to grant Firehose write permissions to the Lambda function itself. We cannot attach permissions to a Lambda Layer. Since our extension code runs beside the lambda, all the permissions applied to the lambda are available to the extensions as well.
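As a sketch, the function's execution role would need a policy statement along these lines (the region, account ID, and stream name in the ARN are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
      "Resource": "arn:aws:firehose:us-east-1:123456789012:deliverystream/my-delivery-stream"
    }
  ]
}
```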



Putting It All Together

Creating and deploying Lambda Extensions can be tedious work, as we have seen above. To make our life easier (and also for me to provide a reproducible example of what I was talking about before), we can write some Terraform IaC code for the whole deployment process.

A working example of a Lambda function with a Rust extension can be found on my GitHub page: https://github.com/Ernyoke/lambda-log-router. It is a Terragrunt project requiring a recent version of Terraform (>1.3.0) and Rust (>1.63).



References

  1. Introducing AWS Lambda Extensions: https://aws.amazon.com/blogs/compute/introducing-aws-lambda-extensions-in-preview
  2. Lambda Extension API: https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html
  3. Getting the most of AWS Lambda free compute – wrapper scripts: https://style-tricks.com/aws-builders/getting-the-most-of-aws-lambda-free-compute-wrapper-scripts-3h4b
  4. Sidecar pattern: https://learn.microsoft.com/en-us/azure/architecture/patterns/sidecar
  5. cargo-lambda: https://github.com/cargo-lambda/cargo-lambda
  6. Lambda Logs API: https://docs.aws.amazon.com/lambda/latest/dg/runtimes-logs-api.html
  7. Extension Companions: https://docs.aws.amazon.com/lambda/latest/dg/extensions-api-partners.html
