Uselatest – a CloudFormation macro to always use the latest version of a Lambda Layer

One of the drawbacks of using a Lambda Layer is that you must declare it by its fully qualified ARN, version included. This is a hassle: every time you update a Layer, you need to update its declaration in every stack that uses it to pick up the changes. It would be much better if you could specify it by name alone (similar to the FunctionName when declaring an event source mapping). That is, instead of arn:aws:lambda:us-east-1:123456789012:layer:my-layer:24, just my-layer.

I made a CloudFormation macro to do just that.

Uselatest scans through a CloudFormation template and replaces every Lambda Layer reference that is not fully qualified with the ARN of the latest available version of that Layer. This way you don't have to think about updating a template after updating a Layer. The latest version gets picked up automatically during stack deployment. Magic. ✨
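Under the hood, resolving "latest" boils down to one Lambda API call per Layer. Here is a minimal sketch of that lookup — the function name and structure are my own illustration, not necessarily how the macro is implemented; the client would be boto3.client('lambda'):

```python
def latest_layer_arn(lambda_client, layer_name):
    """Return the ARN of the newest version of a Layer, or None if it has none.

    `lambda_client` is an AWS Lambda client, e.g. boto3.client('lambda').
    """
    # list_layer_versions returns entries carrying 'Version' and 'LayerVersionArn'
    response = lambda_client.list_layer_versions(LayerName=layer_name)
    versions = response['LayerVersions']
    if not versions:
        return None
    newest = max(versions, key=lambda v: v['Version'])
    return newest['LayerVersionArn']
```

A production version would also follow the NextMarker pagination token for Layers with many versions.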

The macro works in all the places where you can declare a Layer. Check the Example section for more.
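The transform itself is conceptually simple: walk the parsed template and qualify every bare Layer name. A rough sketch of that pass — a simplification with names of my own choosing; the real macro handles more declaration sites than just Properties.Layers:

```python
def is_qualified(layer_ref):
    """A fully qualified Layer ARN ends in a numeric version, e.g. ...:my-layer:24."""
    return layer_ref.rsplit(':', 1)[-1].isdigit()


def resolve_layers(template, resolver):
    """Qualify bare Layer names in a parsed template dict.

    `resolver` maps a Layer name to the ARN of its latest version.
    """
    for resource in template.get('Resources', {}).values():
        properties = resource.get('Properties', {})
        layers = properties.get('Layers')
        if layers:
            properties['Layers'] = [
                # Only plain, unqualified strings get resolved; intrinsic
                # functions (dicts) and full ARNs pass through untouched.
                resolver(layer) if isinstance(layer, str) and not is_qualified(layer) else layer
                for layer in layers
            ]
    return template
```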

I wanted to make it available in the Serverless Application Repository, but sadly, a CloudFormation macro is not a supported resource there. You'll have to build, package and deploy it yourself if you want to use it.

Unit testing AWS services in Python

Consider the following piece of code:

import boto3

Table = boto3.resource('dynamodb').Table('foo')


def get_user(user_id):
    ddb_response = Table.get_item(Key={'id': user_id})
    return ddb_response.get('Item')

It’s a contrived example that just reads an item of data from a DynamoDB table. How would you write a unit test for the get_user function?

My favourite way to do so is to combine pytest fixtures and botocore’s Stubber:

from botocore.stub import Stubber, ANY
import pytest

import models


@pytest.fixture
def ddb_stubber():
    # Stubber is a context manager: it activates on enter, deactivates on exit
    with Stubber(models.Table.meta.client) as ddb_stubber:
        yield ddb_stubber
        ddb_stubber.assert_no_pending_responses()


def test_user_exists(ddb_stubber):
    user_id = 'user123'
    get_item_params = {'TableName': ANY,
                       'Key': {'id': user_id}}
    get_item_response = {'Item': {'id': {'S': user_id},
                                  'name': {'S': 'Spam'}}}
    ddb_stubber.add_response('get_item', get_item_response, get_item_params)
    result = models.get_user(user_id)
    assert result.get('id') == user_id


def test_user_missing(ddb_stubber):
    user_id = 'user123'
    get_item_params = {'TableName': ANY,
                       'Key': {'id': user_id}}
    get_item_response = {}
    ddb_stubber.add_response('get_item', get_item_response, get_item_params)
    result = models.get_user(user_id)
    assert result is None

There are a couple of things to note here.

First, I'm using the wonderful scope functionality of pytest fixtures. With function scope (the default), a fresh fixture is created for every test function execution. This is necessary for the Stubber to work correctly: each test gets its own set of stubbed responses.

The Stubber needs to be created with the correct client. Since I'm using a DynamoDB Table resource instance, I have to reach for its underlying client (models.Table.meta.client) when creating the Stubber instance.

Notice also the "verbose" get_item_response structure in the first test. That's the wire format the DynamoDB client exchanges with the DynamoDB API (needless to say, this is DynamoDB specific). The Table resource is a layer of abstraction on top of this: it converts between DynamoDB attribute types and native Python types. However, it still uses the client underneath, so the stubbed response has to be in the client's format.
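To see what that conversion does, here is a hand-rolled sketch of the deserialization for the two attribute types used above. This is only an illustration: boto3 ships the real thing as boto3.dynamodb.types.TypeDeserializer, which covers every DynamoDB type (and returns Decimal for numbers, unlike this sketch):

```python
def deserialize(typed_value):
    """Convert a DynamoDB-typed attribute value like {'S': 'Spam'} to Python.

    Handles only the 'S' (string) and 'N' (number) type tags.
    """
    (tag, value), = typed_value.items()
    if tag == 'S':
        return value
    if tag == 'N':  # DynamoDB sends numbers as strings over the wire
        return float(value) if '.' in value else int(value)
    raise ValueError(f'unhandled DynamoDB type tag: {tag}')


raw_item = {'id': {'S': 'user123'}, 'name': {'S': 'Spam'}, 'age': {'N': '42'}}
item = {key: deserialize(value) for key, value in raw_item.items()}
```

This is exactly the translation the Table resource performs for you on every call.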

Finally, it's good practice to call assert_no_pending_responses to make sure the tested code actually made the expected calls to the AWS service.

I really like this combination of pytest and Stubber. It’s a great match for writing correct and compact tests.