AWS S3 — Serverless image upload and thumbnail creation.

Scenario

  1. Our application supports multiple authentication mechanisms for login: Google, Facebook, custom login, etc.
  2. Customers upload images after logging in
  3. Images are stored in Amazon S3
  4. A thumbnail is generated for each image to use in the list view

Traditional way

Uploading images into S3 from browser — Traditional way

This approach has some drawbacks

  1. Bandwidth usage on the application server: since images are uploaded to the application server first, they consume a major share of its bandwidth.
  2. CPU usage: processing images and generating thumbnails needs comparatively high CPU.
  3. Third-party software dependencies: we have to install image manipulation tools on the server.

To avoid these headaches, we can go for a serverless architecture.

Serverless image upload and processing

Uploading images to S3 the serverless way

Steps to upload images directly to AWS S3

  1. Create a policy that has permission to get objects from and put objects into S3.
  2. Create a role with the above policy.
  3. Create a lambda function with the above role.
  4. Add triggers to notify the lambda function when png, jpg or jpeg images are uploaded into a certain bucket.
  5. Create a layer for the sharp module.
  6. Write the code to generate the thumbnail.
  7. Create a temporary URL to put the object into S3.
  8. Put the object using the above URL.
  9. Check the bucket for the thumbnail.

Prerequisites: if you don't have a bucket, create one.

1. Create a policy with the necessary permissions.

When you create a policy, grant only the necessary permissions, no more and no less. The following permissions are required for our purpose.

After logging in to the AWS console, go to IAM > Policies > Create

https://console.aws.amazon.com/iam/home?region=us-west-2#/policies$new?step=edit

You may change the region if required. Select the following permissions from the list.

Permissions required for image uploading and processing.

Select your bucket as the resource for this policy, so that our lambda function cannot affect other buckets.
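
For reference, a minimal policy document along these lines could look like the one below. This is only a sketch: it assumes just s3:GetObject and s3:PutObject are needed and uses the cms-v3 bucket name that appears later in this post, so adjust the bucket name (and add s3:PutObjectAcl if your function sets ACLs) to match your own setup.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ThumbnailS3Access",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::cms-v3/*"
    }
  ]
}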

2. Create a role with the above policy.

Now go to IAM > Roles > Create, create a role, and attach the policy we created in step 1. Remember the name of this role; we will use it later.

While creating the role, select the option "Allows Lambda functions to call AWS services on your behalf."
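
Choosing that option makes the Lambda service a trusted entity of the role; the resulting trust policy is equivalent to the following (shown only for reference, you don't need to type it yourself):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}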

3. Create a lambda function with the above role.

Go to Lambda > Create and create a lambda function. Select the above role while creating it.
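
If you prefer the CLI over the console, the equivalent would be roughly the following sketch; the function name, runtime and role ARN are placeholders, and function.zip is a stub deployment package you would replace with the real code later.

aws lambda create-function \
  --function-name generate-thumbnail \
  --runtime nodejs12.x \
  --handler index.handler \
  --role arn:aws:iam::123456789012:role/your-thumbnail-role \
  --zip-file fileb://function.zip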

4. Add triggers to notify the lambda function when png, jpg or jpeg images are uploaded into a certain bucket.

Now create a trigger so that when an image is uploaded to our bucket, it invokes our lambda function.

Trigger for jpg images

I created the above trigger so that our function gets invoked when an image named org.jpg is uploaded into any subfolder of resources/medias in the cms-v3 bucket.
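
For reference, the trigger settings in the console looked roughly like this in my setup; separate triggers with .png and .jpeg suffixes can be added the same way, and you should adjust the bucket and prefix to yours.

Bucket:      cms-v3
Event types: All object create events
Prefix:      resources/medias/
Suffix:      org.jpg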

5. Create a layer for the sharp module.

Execute the code below on your local machine to create a zip bundle of the sharp library.

rm -r ~/Temp
mkdir ~/Temp
cd ~/Temp
mkdir nodejs ## don't use any other name, only use nodejs
cd nodejs
npm install sharp@0.28.1
## If using MAC then npm install --arch=x64 --platform=linux --target=12.13.0 sharp@0.28.1
cd ..
zip -r sharp.zip .

Now upload the sharp.zip file into an AWS S3 bucket (any folder will do) and add a layer via Lambda > Layers > Add Layer

Creating a layer for sharp module.
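
If you would rather script this step, the same layer can be published from the zip in S3 with the AWS CLI; this is just a sketch, and the bucket name, object key and layer name are placeholders to replace with your own.

aws lambda publish-layer-version \
  --layer-name sharp \
  --description "sharp 0.28.1 built for Linux x64" \
  --content S3Bucket=your-bucket,S3Key=sharp.zip \
  --compatible-runtimes nodejs12.x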

Now go back to the previous lambda function and attach the layer to it by clicking on Add a layer

Assigning layer to lambda function.

6. Write the code to generate the thumbnail.

Copy-paste the code below into index.js in the code section of the lambda function. Change the width constant near the top of the code to whatever size you want.

/* eslint-disable */
// dependencies
const AWS = require('aws-sdk');
const util = require('util');
const sharp = require('sharp');

// get reference to S3 client
const s3 = new AWS.S3();

// set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
const width = 720;

exports.handler = async (event, context, callback) => {
  // Read options from the event parameter.
  console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
  const srcBucket = event.Records[0].s3.bucket.name;

  // Object key may have spaces or unicode non-ASCII characters.
  const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  const fileName = srcKey.replace(/^.*[\\\/]/, '');
  const dstBucket = srcBucket; // + "-resized";

  const folderMatch = srcKey.match(/(.*)[\/\\]/);
  let folderName = '';
  if (folderMatch) {
    folderName = `${folderMatch[1] || ''}/`;
  }

  // Infer the image type from the file suffix.
  const typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    console.log("Could not determine the image type.");
    return;
  }

  // Check that the image type is supported.
  const imageType = typeMatch[1].toLowerCase();
  const dstKey = `${folderName}w${width}.${imageType}`;
  if (!["jpg", "png", "jpeg"].includes(imageType)) {
    console.log(`Unsupported image type: ${imageType}`);
    return;
  }

  // Download the image from the S3 source bucket.
  try {
    const params = {
      Bucket: srcBucket,
      Key: srcKey
    };
    var origimage = await s3.getObject(params).promise();
  } catch (error) {
    console.log(error);
    return;
  }

  let imageBuffer = origimage.Body;

  // Use the Sharp module to resize the image and save in a buffer.
  try {
    var buffer = await sharp(imageBuffer).resize(width).withMetadata().toBuffer();
  } catch (error) {
    console.log(error);
    return;
  }

  // Upload the thumbnail image to the destination bucket.
  // ACL:'public-read'
  // const acl = fileName.startsWith('public_org') ? 'public-read' : 'private';
  const acl = 'private';
  try {
    const destparams = {
      Bucket: dstBucket,
      Key: dstKey,
      Body: buffer,
      ContentType: "image",
      ACL: acl
    };
    const putResult = await s3.putObject(destparams).promise();
  } catch (error) {
    console.log(error);
    return;
  }

  console.log('Successfully resized ' + srcBucket + '/' + srcKey +
    ' and uploaded to ' + dstBucket + '/' + dstKey);
  return srcKey;
};

Deploy this code. Create a folder resources/medias in your bucket. Upload an image named org.jpg (the file name matters) into any subfolder of that folder. You should see another image, w720.jpg, created alongside it.

Thumbnail generated.
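
If you have the AWS CLI configured, a quick way to test the trigger is to copy an image in from your machine; the sample-post subfolder name here is just an example.

aws s3 cp ./org.jpg s3://cms-v3/resources/medias/sample-post/org.jpg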

7. Create a temporary URL to put the object into S3.

Now create a server-side API that returns a temporary URL for uploading images to S3. We need to create a user in AWS that has permission to upload files into the bucket created above.

Add a user with the below policy

Create user to generate temp url

Now create an access key and secret key from Users > Security credentials > Create access key

Now we have the following parameters.

AWS_BUCKET_NAME=
AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_MEDIA_BASE_PATH=resources/medias/
TEMP_PUT_URL_EXPIRY=300

Edit the CORS policy of the bucket

  • Select the bucket in S3 and open the Permissions tab
  • Scroll down and click the Edit button in the CORS section
  • Add the code below and save
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]
  • Use the code below to generate a temporary PUT URL
// These utilities come from the AWS SDK for JavaScript v3 packages.
const { S3RequestPresigner } = require('@aws-sdk/s3-request-presigner');
const { Hash } = require('@aws-sdk/hash-node');
const { HttpRequest } = require('@aws-sdk/protocol-http');
const { parseUrl } = require('@aws-sdk/url-parser');
const { formatUrl } = require('@aws-sdk/util-format-url');

// AWS_BUCKET_NAME, AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
// AWS_MEDIA_BASE_PATH and TEMP_PUT_URL_EXPIRY come from the parameters listed above.
// The snippet below belongs inside an async function.
const key = 'some-sub-folders/org.jpg';
const credentials = {
  accessKeyId: AWS_ACCESS_KEY_ID,
  secretAccessKey: AWS_SECRET_ACCESS_KEY
};
const s3ObjectUrl = parseUrl(`https://${AWS_BUCKET_NAME}.s3.${AWS_REGION}.amazonaws.com/${AWS_MEDIA_BASE_PATH}${key}`);
const presigner = new S3RequestPresigner({
  credentials,
  region: AWS_REGION,
  sha256: Hash.bind(null, 'sha256'),
});
const url = await presigner.presign(
  new HttpRequest({ ...s3ObjectUrl, method: 'PUT' }),
  { expiresIn: Number(TEMP_PUT_URL_EXPIRY) }
);
const signedPutUrl = formatUrl(url);
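
To serve this from your API, one option (a sketch, not the only way) is a small Express endpoint. Here I assume the presigner snippet above has been wrapped in a helper called createPutUrl(key), and the route path /api/upload-url is arbitrary.

const express = require('express');
const app = express();

// createPutUrl(key) is assumed to wrap the presigner snippet above
// and return the signed PUT URL for the given object key.
app.get('/api/upload-url', async (req, res) => {
  try {
    // In a real app the key would be derived from the authenticated user,
    // not taken directly from the query string.
    const key = `${req.query.folder}/org.jpg`;
    const url = await createPutUrl(key);
    res.json({ url });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Could not create upload URL' });
  }
});

app.listen(3000);

The browser calls this endpoint, receives the signed URL, and then performs the PUT shown in the next step.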

8. Put the object using the above URL.

To put a file into S3, use the below sample code.

const url = ...; // Obtain signedPutUrl using the server API
// Using Angular's HttpClient; subscribe() actually fires the request.
this.http.put(url, file).pipe(
  catchError((err) => throwError(err))
).subscribe();

9. Check the bucket for the thumbnail

Now you should see org.jpg and w720.jpg in the corresponding subfolder of the S3 bucket.
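
You can confirm the same thing from the CLI; adjust the bucket and folder to wherever you uploaded. Both org.jpg and w720.jpg should be listed.

aws s3 ls s3://cms-v3/resources/medias/sample-post/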
