Deploying an ASP.NET Core Microservice on AWS Lambda

Introduction
Serverless cloud offerings continue to gain traction, with Microsoft Azure, Google, and others following AWS Lambda's preview introduction in 2014. The term "serverless" implies that the details of managing where code is run are abstracted away from the tasks of developing and deploying the code. Consumers of serverless offerings benefit from:
- Reduced costs - Pricing is typically based on the actual time code runs and the memory allocated, rather than paying for "always on" infrastructure regardless of actual load.
- Simplified administration - The scaling of the infrastructure to handle varying load is handled transparently by the service provider.
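As a rough illustration, a 512 MB function that runs for 200 ms per request and handles a million requests a month consumes about 0.5 GB x 0.2 s x 1,000,000 = 100,000 GB-seconds of compute, which at Lambda's published per-GB-second and per-request rates typically works out to just a few dollars a month (prices vary by region and change over time, so treat these numbers as illustrative only).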

AWS recently added support for .NET Core 1.0 to its Lambda service. This blog post and the accompanying guide/source code on GitHub provide a quick walkthrough of deploying an existing ASP.NET Core microservice API to AWS Lambda. Here's a quick diagram of where we're headed:

[Diagram: ASP.NET Core on AWS Lambda (created with Cloudcraft)]

Our Simple Microservice API

While the focus of this post isn't on ASP.NET Core, let's take a quick look at the microservice we're starting with and want to deploy on Lambda. It exposes three simple endpoints using some static data from the Star Wars API (a simplified sketch of the controller follows the list below).

  • GET /api/starwars/characters?limit=5
  • GET /api/starwars/characters/search/skywalker
  • POST /api/starwars/characters with a JSON body like this { "name": "Harry Skywalker", "eye_color": "blue" }
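To make the routes concrete, here's a simplified, self-contained sketch of what the controller behind these endpoints could look like. The class name, the static seed data, and the trimmed-down Character model are assumptions for illustration; the actual implementation is in the GitHub repository.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;

namespace StarWarsMicroservice.Controllers
{
    // Trimmed-down model; the real project maps more fields from the Star Wars API data.
    public class Character
    {
        public string Name { get; set; }

        [JsonProperty("eye_color")]
        public string EyeColor { get; set; }
    }

    [Route("api/starwars")]
    public class StarWarsController : Controller
    {
        // Static seed data stands in for the data set used in the actual project.
        private static readonly List<Character> Characters = new List<Character>
        {
            new Character { Name = "Luke Skywalker", EyeColor = "blue" },
            new Character { Name = "Darth Vader", EyeColor = "yellow" }
        };

        // GET /api/starwars/characters?limit=5
        [HttpGet("characters")]
        public IActionResult GetCharacters(int limit = 10)
        {
            return Ok(Characters.Take(limit));
        }

        // GET /api/starwars/characters/search/skywalker
        [HttpGet("characters/search/{term}")]
        public IActionResult Search(string term)
        {
            return Ok(Characters.Where(
                c => c.Name.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0));
        }

        // POST /api/starwars/characters
        [HttpPost("characters")]
        public IActionResult AddCharacter([FromBody] Character character)
        {
            Characters.Add(character);
            return Ok(character);
        }
    }
}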

There’s also an accompanying test project which is kept separate from the API project so we don’t inflate size of the microservice assembly that will be deployed to AWS.

Prerequisites and Setup
We’re going to be using the .NET and AWS command line tools to build, publish and deploy to Lambda. Make sure that you’ve:

  • Installed the .NET Core command line interface
  • Installed and configured the AWS command line interface
    Important! For this guide you'll want to ensure the AWS account you're using has permission to create S3 buckets, API Gateway resources, CloudFormation stacks and, of course, Lambda functions.
  • Cloned the repository from GitHub

Once you’ve verified your tooling is setup and have cloned the repository, try simply running the microservice locally:

# Run from the root of the repository
dotnet restore
dotnet run -p StarWarsMicroservice/
# Open http://localhost:5000/api/starwars/characters/search/skywalker
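You can also exercise the POST endpoint locally with curl (this assumes the app is listening on Kestrel's default port 5000):

# Add a new character via the POST endpoint
curl -H "Content-Type: application/json" \
  -d '{ "name": "Harry Skywalker", "eye_color": "blue" }' \
  http://localhost:5000/api/starwars/characters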

Check out Step 1 of the guide for some additional details if needed.

Preparing our Microservice for AWS Lambda

AWS Lambda functions do not listen directly for incoming HTTP requests. They are simple blocks of functionality that can be triggered by a wide variety of events within an AWS infrastructure. AWS provides another service, API Gateway, that does listen for HTTP requests and can forward them on to Lambda functions. It's important to understand that API Gateway performs a translation step, converting the incoming HTTP request into a "request-like" JSON structure. The details and an example are covered in "Input Format of a Lambda Function for Proxy Integration", but the basic structure looks like this:

{
  "resource": "Resource path",
  "path": "Path parameter",
  "httpMethod": "Incoming request's method name",
  "headers": {Incoming request headers},
  "queryStringParameters": {Query string parameters},
  "pathParameters": {Path parameters},
  "stageVariables": {Applicable stage variables},
  "requestContext": {Request context, including authorizer-returned key-value pairs},
  "body": "A JSON string of the request payload.",
  "isBase64Encoded": "A boolean flag to indicate if the applicable request payload is Base64-encoded"
}

There is a similar process for the outgoing HTTP response: the Lambda function is expected to return a particular JSON structure from which API Gateway builds the actual HTTP response message. This is covered in "Output Format of a Lambda Function for Proxy Integration".
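The expected response structure is much simpler; it looks roughly like this (the values below are illustrative):

{
  "statusCode": 200,
  "headers": { "Content-Type": "application/json" },
  "body": "A JSON string of the response payload",
  "isBase64Encoded": false
}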

Fortunately, there are convenient shims to handle this transformation between the input/output formats expected by Lambda proxy integration and popular web development frameworks. For example, Node's Express framework has lambda-express, and for our ASP.NET Core app there's Amazon.Lambda.AspNetCoreServer.

We’ll update the project.json file with the dependency “Amazon.Lambda.AspNetCoreServer”: “0.8.4-preview1”, run dotnet restore to grab the package and then add a new class that will serve as the AWS Lambda entrypoint to our microservice app.

using Amazon.Lambda.AspNetCoreServer;
using Microsoft.AspNetCore.Hosting;
using System.IO;

namespace StarWarsMicroservice
{
    public class LambdaGateway : APIGatewayProxyFunction
    {
        protected override void Init(IWebHostBuilder builder)
        {
            builder
                .UseApiGateway()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseStartup<Startup>();
        }
    }
}

The APIGatewayProxyFunction class we're extending contains the actual entry point, FunctionHandlerAsync, that AWS Lambda will call into. This Init implementation routes the API Gateway proxy structure into the routing and application configuration from our existing Startup class. Have a look at Step 2 of the guide for additional details.
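For context, the starwars-api-template.yml file used in the next section is an AWS Serverless Application Model (SAM) template that wires an API Gateway proxy resource to this entry point via a handler string in the form assembly::namespace.class::method. The snippet below is a rough sketch of such a template; the resource name, memory size, timeout and code path are assumptions, so refer to the repository for the real file.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  StarWarsApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      # Handler format: assembly::namespace.class::method
      Handler: StarWarsMicroservice::StarWarsMicroservice.LambdaGateway::FunctionHandlerAsync
      Runtime: dotnetcore1.0
      CodeUri: ./StarWarsMicroservice/bin/Release/netcoreapp1.0/publish
      MemorySize: 256
      Timeout: 30
      Events:
        ProxyResource:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY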

Deploying to AWS
Now we’re ready to deploy our microservice to AWS Lambda. Create a new S3 bucket where we can push the deployment package for our microservice. Using this the aws command line for instance:

# Remember the S3 bucket namespace is global so you should *choose your own name*
aws s3 mb s3://starwars-api-lambda

Next we’ll instruct dotnet to prepare publish directory with our compiled microservice and dependencies:

# Run from the root of the repository
# This will upload our published microservice directory to S3
# Be sure to specify the name of the S3 bucket you created earlier
aws cloudformation package \
  --template-file StarWarsMicroservice/starwars-api-template.yml \
  --output-template-file StarWarsMicroservice/serverless-output.yaml \
  --s3-bucket starwars-api-lambda
# This will generate the API Gateway and Lambda function described in our template
aws cloudformation deploy \
  --template-file StarWarsMicroservice/serverless-output.yaml \
  --stack-name starwars-api \
  --capabilities CAPABILITY_IAM

Finally we’re ready to test our API on AWS. Sign into the AWS API Gateway console to see a new star wars API created. The URL for the endpoint should look like this:

https://<YOUR_API_ID>.execute-api.<YOUR_REGION>.amazonaws.com/Prod/

Now go ahead and give it a whirl. See if you can find Darth Vader:

https://<YOUR_API_ID>.execute-api.<YOUR_REGION>.amazonaws.com/Prod/api/starwars/characters/search/vader

There’s bonus step for automating the deployment to AWS in Step 4 of the guide as well as some links for further reading.

Wrapping Up
While we looked today at deploying an ASP.NET Core microservice on AWS Lambda, Python, NodeJS and Java are also supported (and have been for longer than .NET). In keeping with the spirit of a microservice architecture, one could potentially mix and match microservices, each written in the most appropriate language for the job, behind a single exposed API.
Keep in mind that exposing an ASP.NET Core microservice in this way, while giving you the advantage of easily running it in a local development environment, will add some processing and memory overhead relative to a simpler, bare-bones Lambda function implementation.

The use of serverless architectural patterns, also available on Microsoft Azure and Google Cloud Platform, offers potential cost savings and easier management compared with both traditional in-house infrastructure and cloud-hosted infrastructure. In addition to handling API Gateway events, AWS Lambda functions can respond to a variety of other events in the AWS ecosystem such as S3 file uploads, SNS messages, etc. Refer to the AWS Lambda docs for the most up-to-date details on the service. While every situation is unique, serverless architecture should be part of the ongoing conversation about how to approach your IT architecture and strategy.


A full tour of API Gateway is outside the scope of this post, but it also offers functionality such as SSL termination and caching.