What is AWS Lambda? Everything You Need to Know
A Complete Guide to AWS Lambda
What is AWS Lambda?
AWS Lambda is a serverless compute service provided by Amazon Web Services. It lets you run code in response to events, such as changes to data in an Amazon S3 bucket or updates to a DynamoDB table. With Lambda, you pay only for the compute time you consume; there is no charge when your code isn't running.
Key Features of AWS Lambda
No Server Management: AWS handles all the infrastructure for you.
Scalability: Lambda automatically scales your application by running code in response to each trigger.
Cost-Efficiency: Pay only for the compute time you use.
Integration: Easily integrates with other AWS services such as S3, DynamoDB, Kinesis, and more.
High Availability: Built-in fault tolerance.
Creating a Basic Lambda Function
Let's start by creating a basic Lambda function. This function will simply log a message to the console.
Step 1: Create an IAM Role
First, we need to create an IAM role that Lambda can assume to execute the function.
Log in to the AWS Management Console.
Navigate to the IAM Dashboard:
Search for "IAM" in the AWS services search bar and select "IAM".
Create a New Role:
Click "Roles" in the left sidebar.
Click "Create role".
Choose a Role Type:
Select "AWS service".
Choose "Lambda".
Click "Next: Permissions".
Attach Permissions Policies:
Attach the policy "AWSLambdaBasicExecutionRole".
Click "Next: Tags".
Click "Next: Review".
Name and Create the Role:
Name your role (e.g. lambda_basic_execution_role).
Click "Create role".
Step 2: Create a Lambda Function
Navigate to the Lambda Dashboard:
Search for "Lambda" in the AWS services search bar and select "Lambda".
Create a New Function:
Click "Create function".
Choose a Blueprint: Select "Author from scratch".
Function Name: Enter a name for your function (e.g. MyFirstLambdaFunction).
Runtime: Choose the runtime (e.g. Python 3.8).
Role: Choose "Use an existing role".
Existing role: Select the role you created earlier (lambda_basic_execution_role).
Click "Create function".
Step 3: Write Your Lambda Function Code
Function Code: In the "Function code" section, enter the following code:
import json

def lambda_handler(event, context):
    # Log the incoming event
    print("Received event: " + json.dumps(event, indent=2))
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Save the Code: Click "Deploy" to save your function code.
Step 4: Test Your Lambda Function
Configure Test Event:
Click "Test".
Event Name: Enter a name for the test event (e.g. testEvent).
Event JSON: Use the default event JSON or modify it as needed.
Click "Create".
Run the Test:
Click "Test" again to execute the function.
View Results:
Check the execution results in the "Execution result" section.
The log output should show the received event and the "Hello from Lambda!" message.
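You can also invoke the function outside the console, for example from a small boto3 script; the payload here is just an arbitrary test event.

import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="MyFirstLambdaFunction",
    Payload=json.dumps({"key1": "value1"}),
)

# The function's return value comes back in the response payload
print(json.loads(response["Payload"].read()))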
Integrating Lambda with Other AWS Services
AWS Lambda integrates easily with other AWS services to create powerful event-driven applications. Let's look at two examples: integrating Lambda with Amazon S3 and with Amazon DynamoDB.
Example 1: Integrating Lambda with Amazon S3
In this example, we'll create a Lambda function that triggers whenever a new object is uploaded to an S3 bucket.
Step 1: Create an S3 Bucket
Log in to the AWS Management Console.
Navigate to the S3 Dashboard.
Create a Bucket:
Click "Create bucket".
Bucket Name: Enter a unique name (e.g. my-lambda-trigger-bucket).
Region: Choose a region.
Click "Create bucket".
Step 2 : Create a Lambda Function
Navigate to the Lambda Dashboard.
Create a New Function :
Click "Create function".
Function Name : Enter a name for your function (e.g. S3TriggerFunction ).
Runtime : Choose the runtime (e.g. Python 3.8).
Role : Choose "Use an existing role".
Existing role : Select the role you created earlier (lambda_basic_execution_role).
Click "Create function".
Step 3: Write Your Lambda Function Code
Function Code: In the "Function code" section, enter the following code:
import json

def lambda_handler(event, context):
    # Log the event received from S3
    print("Received event: " + json.dumps(event, indent=2))

    # Get the bucket name and object key from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # Log the bucket name and object key
    print(f"New object uploaded to {bucket}: {key}")

    return {
        'statusCode': 200,
        'body': json.dumps(f"Processed file {key} from bucket {bucket}")
    }
Save the Code: Click "Deploy" to save your function code.
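One detail worth noting: S3 URL-encodes the object key in the event, so a name like "my file.txt" arrives as "my+file.txt". If your bucket will contain such names, a small optional helper (extract_key is just an illustrative name) can decode the key before you use it.

from urllib.parse import unquote_plus

def extract_key(event):
    # S3 URL-encodes the object key in the event payload; decode it before use
    return unquote_plus(event['Records'][0]['s3']['object']['key'])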
Step 4: Configure the S3 Trigger
Navigate to the S3 Bucket:
Go to the S3 dashboard and select the bucket you created (my-lambda-trigger-bucket).
Set Up Event Notification:
Click on the "Properties" tab.
Scroll down to "Event notifications" and click "Create event notification".
Event name: Enter a name for the event (e.g. new-object-upload).
Prefix and Suffix: Optionally specify a prefix or suffix if you only want to trigger on certain objects.
Events: Select "All object create events".
Send to: Choose "Lambda function".
Lambda function: Select the Lambda function you created (S3TriggerFunction).
Click "Save changes".
Step 5: Test the Integration
Upload a File to the S3 Bucket:
Click the "Upload" button in the S3 dashboard.
Add a file to upload.
Click "Upload".
Check Lambda Execution:
Navigate to the Lambda dashboard and select your function (S3TriggerFunction).
Click on the "Monitor" tab.
Check the "Log groups" in CloudWatch to see the logs generated by the function. You should see log entries for the uploaded file.
Example 2: Integrating Lambda with Amazon DynamoDB
In this example, we'll create a Lambda function that triggers whenever a new item is added to a DynamoDB table.
Step 1: Create a DynamoDB Table
Log in to the AWS Management Console.
Navigate to the DynamoDB Dashboard.
Create a Table:
Click "Create table".
Table name: Enter a name (e.g. MyLambdaTriggerTable).
Partition key: Enter a primary key name and type (e.g. ID of type String).
Click "Create table".
Step 2: Create a Lambda Function
Navigate to the Lambda Dashboard.
Create a New Function:
Click "Create function".
Function Name: Enter a name for your function (e.g. DynamoDBTriggerFunction).
Runtime: Choose the runtime (e.g. Python 3.8).
Role: Choose "Use an existing role".
Existing role: Select the role you created earlier (lambda_basic_execution_role). Note that reading from a DynamoDB stream requires additional permissions; attach the AWS managed policy "AWSLambdaDynamoDBExecutionRole" to the role (or grant equivalent stream-read permissions) before configuring the trigger.
Click "Create function".
Step 3: Write Your Lambda Function Code
Function Code: In the "Function code" section, enter the following code:
import json

def lambda_handler(event, context):
    # Log the event received from DynamoDB
    print("Received event: " + json.dumps(event, indent=2))

    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            # Extract new item details
            new_item = record['dynamodb']['NewImage']
            print(f"New item added: {json.dumps(new_item, indent=2)}")

    return {
        'statusCode': 200,
        'body': json.dumps('Processed DynamoDB Stream event')
    }
Save the Code: Click "Deploy" to save your function code.
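Keep in mind that NewImage arrives in DynamoDB's attribute-value format (for example {'ID': {'S': '123'}}) rather than plain JSON. If you want ordinary Python values, one optional approach is boto3's TypeDeserializer; to_plain_item below is just an illustrative helper name.

from boto3.dynamodb.types import TypeDeserializer

deserializer = TypeDeserializer()

def to_plain_item(new_image):
    # Convert {'ID': {'S': '123'}} style attributes into plain Python values
    return {key: deserializer.deserialize(value) for key, value in new_image.items()}

# Example: to_plain_item({'ID': {'S': '123'}}) returns {'ID': '123'}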
Step 4: Configure DynamoDB Stream
Enable DynamoDB Streams:
Navigate to the DynamoDB dashboard and select your table (MyLambdaTriggerTable).
Click the "Manage Stream" button.
Select "New and old images".
Click "Enable".
Create a Trigger in Lambda:
Go to the Lambda dashboard and select your function (DynamoDBTriggerFunction).
Click "Add trigger".
Trigger configuration:
Trigger type: Select "DynamoDB".
DynamoDB table: Select your table (MyLambdaTriggerTable).
Batch size: Set to your desired value (default is 100).
Click "Add".
Step 5: Test the Integration
Add a New Item to the DynamoDB Table (or add one from a script, as shown below):
Navigate to the DynamoDB dashboard and select your table.
Click the "Items" tab.
Click "Create item".
Enter the primary key value and any other attributes.
Click "Save".
Check Lambda Execution:
Navigate to the Lambda dashboard and select your function (DynamoDBTriggerFunction).
Click on the "Monitor" tab.
Check the "Log groups" in CloudWatch to see the logs generated by the function. You should see log entries for the new item added to the table.
Best Practices for Using AWS Lambda
Optimize Function Performance
Minimize Function Duration: Ensure your functions complete as quickly as possible to reduce costs.
Manage Dependencies Efficiently: Use Lambda layers to manage common dependencies across functions.
Environment Variables: Use environment variables to manage configuration settings.
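As a quick illustration of that last point, a handler can read configuration from environment variables instead of hard-coding it; TABLE_NAME here is a hypothetical variable you would set in the function's configuration.

import os

# Hypothetical setting defined under Configuration -> Environment variables
TABLE_NAME = os.environ.get("TABLE_NAME", "MyLambdaTriggerTable")

def lambda_handler(event, context):
    print(f"Using table: {TABLE_NAME}")
    return {"statusCode": 200}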
Secure Your Lambda Functions
Least Privilege Principle: Grant the minimum permissions necessary for your Lambda function to operate.
Encrypt Environment Variables: Use AWS KMS to encrypt environment variables.
Monitor and Log: Enable detailed monitoring and use CloudWatch Logs to monitor function execution and performance.
Handle Errors and Retries
Error Handling: Implement robust error handling within your Lambda functions.
Retries: Configure retries and dead-letter queues to handle failed executions.
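As a sketch of what this can look like inside a handler: catch and log failures, then re-raise so that Lambda's retry and dead-letter queue settings can take over. The process function is a stand-in for your own logic.

import json

def lambda_handler(event, context):
    try:
        result = process(event)
        return {"statusCode": 200, "body": json.dumps(result)}
    except Exception as exc:
        # Log the failure; re-raising lets Lambda's retry/DLQ configuration handle it
        print(f"Processing failed: {exc}")
        raise

def process(event):
    # Hypothetical helper; replace with your own processing logic
    return {"message": "ok"}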
Conclusion
AWS Lambda is a powerful tool for building serverless applications. It eliminates the need for server management, scales automatically, and integrates seamlessly with other AWS services. In this blog post, we covered the basics of AWS Lambda, including creating a basic Lambda function and integrating it with Amazon S3 and DynamoDB. We also discussed best practices for optimizing performance, securing functions, and handling errors.
By leveraging AWS Lambda you can focus on writing code and building applications without worrying about infrastructure management. Stay tuned for more insights and best practices in our upcoming blog posts.
Thanks for reading!