AWS S3 Lambda Integration with Dynamic Path

Hello everyone! In this blog post I will discuss how you can integrate AWS Lambda with S3 when the objects stored in S3 do not share any specific “prefix” or “suffix”.

To understand this scenario, consider an e-commerce website where a backend user can upload arbitrary media assets.

Use Case for E-Commerce

  • Video files that are not natively supported by browsers.
  • Videos that browsers do support, but which are large and would load slowly on mobile devices or cause problems for users on low-bandwidth networks.

To handle such scenarios, you can create a Lambda function that converts your assets to a browser-friendly format.

But you may run into a problem when your assets are uploaded to arbitrary folders that do not share a specific suffix or prefix, e.g. {vendorId}/assets/media/{timestamp}_media_name.mov.

In the above case, it becomes very difficult to create a prefix/suffix trigger, as AWS does not allow triggers based on regex matching or partial path matching.

Because the prefix is dynamic, you cannot create a Lambda trigger based on the prefix.

If you instead create a trigger based on the suffix (e.g. trigger the Lambda for keys ending in .mov), the trigger applies to the entire bucket, so any file uploaded anywhere in the bucket with a .mov extension will invoke the Lambda. This may not be ideal or cost-effective.

Solution to Asset with Dynamic Path

  1. Update your S3 structure.
  2. Modify lambda function to handle regex matching.
  3. Update S3 Filename while uploading.
  4. With AWS SQS

Update your S3 Structure

In this approach, you restructure your S3 keys so that creating a trigger for the Lambda function becomes manageable, for example by moving all convertible media under a common prefix. However, this requires effort to update every other module that uses the existing S3 paths.

Modify Lambda Function to Handle Regex Matching

In this approach, you create the trigger based on a “suffix” (i.e. .mov), but before executing your core logic you check the key against a custom regex pattern.
This incurs additional charges for Lambda invocations on non-matching keys, but you do not have to make any changes to your other modules.
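As a minimal sketch of this approach (the regex pattern and the `convert_media` stub are hypothetical, not from the post):

```python
import re
from urllib.parse import unquote_plus

# Hypothetical pattern for keys like {vendorId}/assets/media/{timestamp}_name.mov
KEY_PATTERN = re.compile(r"^[^/]+/assets/media/\d+_[^/]+\.mov$")

processed = []  # stand-in for the real conversion pipeline


def convert_media(bucket, key):
    # Placeholder for your core logic (e.g. kicking off a transcode job)
    processed.append((bucket, key))


def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event notification keys are URL-encoded, so decode before matching
        key = unquote_plus(record["s3"]["object"]["key"])
        if KEY_PATTERN.match(key):
            convert_media(bucket, key)
        # Non-matching keys fall through: the invocation still costs money,
        # but the core logic never runs
    return {"processed": len(processed)}
```

The early regex check is the only change; the rest of the handler stays whatever your conversion logic already is.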

This could be implemented as a short-term solution while you decide how to handle the other cases.

Another variation of this approach is to create a second Lambda function that sits between the S3 trigger and your original Lambda. This new function invokes your original Lambda only after the S3 key passes a pre-condition or regex check, which lets you keep your existing Lambda function exactly as it is.

It acts as a kind of middleware/interceptor between S3 and your destination Lambda function.
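A sketch of such a dispatcher (the destination function name and the regex are hypothetical; the `client` parameter exists only so the filtering logic can be exercised without AWS):

```python
import json
import re

# Hypothetical pattern and destination Lambda name
KEY_PATTERN = re.compile(r"^[^/]+/assets/media/\d+_[^/]+\.mov$")
TARGET_FUNCTION = "convert-media-assets"


def lambda_handler(event, context, client=None):
    if client is None:
        import boto3  # created lazily so the filter logic stays testable
        client = boto3.client("lambda")
    forwarded = 0
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        if KEY_PATTERN.match(key):
            # Forward the original S3 record unchanged so the destination
            # Lambda can keep its existing event-parsing code as-is
            client.invoke(
                FunctionName=TARGET_FUNCTION,
                InvocationType="Event",  # async fire-and-forget
                Payload=json.dumps({"Records": [record]}),
            )
            forwarded += 1
    return {"forwarded": forwarded}
```

Because the payload mirrors the S3 notification shape, the destination Lambda cannot tell it was invoked indirectly.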

Update S3 Filename while uploading

This is also quite simple, but it requires a code change wherever you upload assets to S3.

When you create the upload URL for S3 (a pre-signed URL), you change the path to include an additional string in its sub-path. For example, if the original upload path is {vendorId}/assets/media/{timestamp}_media_name.mov, you would change it to {vendorId}/assets/media/{timestamp}_media_name.vendor-asset.mov.

Now you can create a trigger on the suffix .vendor-asset.mov.

This approach is also quite flexible: as a developer, you control exactly which assets trigger which Lambda function. For example, you could create two Lambda functions, one that just converts the assets and stores them in S3, and another that also moves the asset to a different location (a different S3 bucket, a different location in the same bucket, etc.).
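The key rewrite itself can be a small helper, used at the point where you build the pre-signed URL (the `vendor-asset` marker is just the example from above):

```python
import os


def add_trigger_suffix(key: str, marker: str = "vendor-asset") -> str:
    """Insert a trigger marker between the filename and its extension."""
    root, ext = os.path.splitext(key)
    return f"{root}.{marker}{ext}"


# add_trigger_suffix("v1/assets/media/1700_clip.mov")
# → "v1/assets/media/1700_clip.vendor-asset.mov"
```

The rewritten key is what you pass when generating the pre-signed upload URL, and `.vendor-asset.mov` becomes the suffix filter on the S3 trigger.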

With AWS SQS

In this approach, you create a separate queue in AWS SQS. For each asset that should trigger the Lambda, your application also adds an entry to this queue, and you create the trigger from AWS SQS to Lambda rather than from S3 to Lambda.
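On the consuming side, a sketch of the SQS-triggered handler could look like this (the message shape is an assumption for illustration, not something the queue enforces):

```python
import json


def lambda_handler(event, context):
    # SQS trigger: each record's body is whatever the uploader enqueued.
    # Assumed message shape: {"bucket": "...", "key": "..."}
    handled = []
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        handled.append((message["bucket"], message["key"]))
        # ... run the conversion for this bucket/key ...
    return {"handled": len(handled)}
```

On the producing side, the upload code enqueues the same shape with `sqs.send_message(QueueUrl=..., MessageBody=json.dumps({...}))`, so only assets your application explicitly enqueues ever invoke the Lambda.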

Thanks for reading…
