
The Best 5 AWS Cloud Projects To Get You Hired (For Beginners)

If you want to land your first job in the cloud industry, it’s extremely important that you complete high-quality projects using the technologies employers are looking for.

Today, I’m going to walk you through five high-quality projects for beginners. Completing these projects will help put you in the top 1% of applicants for cloud jobs.

These projects cover AWS, Terraform, CI/CD, Docker, and Python. By the time you’ve completed them, you’ll have hands-on experience with all five of these important technologies. The next step will be to update your resume to highlight these new skills.

Project 1: Auto Scaling Group Project

The first project you should work on is the Auto Scaling project.

One of the key benefits of the cloud is that infrastructure can be scaled far more quickly and easily than on-premises infrastructure.

This means that being able to demonstrate cloud scaling skills is valuable to employers.

Here’s your first project scenario: a company has an EC2 instance hosting a web server.

Everything is pretty stable until the marketing department starts running promotions that prove to be a hit with customers.

This causes the website to receive spikes in traffic at different times. Every time there is a spike, the server gets overloaded until it fails, which takes the site down. Customers end up disgruntled because they can’t access the site. Not ideal.

Your tech lead considers some scaling options, including vertical scaling, which is where the instance’s resources are increased. For example, if the original instance has 2 vCPUs and 5 GB of RAM, vertical scaling would mean upgrading it to 6 vCPUs and 10 GB of RAM.

But after digging deeper into the data, they realised that the spike only occurred once or twice per week, and the cost of running a larger instance around the clock outweighed the benefits.

After some extra consideration, your tech lead decides on horizontal scaling. This is where more instances are added to meet the increased demand.

This means that rather than having one big instance, you can have two or three smaller instances, and auto scaling can add or remove them depending on traffic levels.

You have now been tasked with setting up auto scaling so that this solution happens automatically.

Your goal is to create a web service that can scale up and down to meet website traffic demands. For example, when your website is getting a lot of traffic, it adds more EC2 instances to deal with the increased load; when traffic goes down, it reduces the number of EC2 instances to save on costs.

As you can see from the diagram, there are multiple resources you need to create to make this work.

Let’s walk through the steps of creating this:

Step 1 involves creating a VPC with three public subnets and three private subnets. This is where all the resources will be created.

Step 2 involves writing an EC2 user data script, so that when a new EC2 instance is deployed, it automatically configures itself as a web server using Apache or Nginx.
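
In the console you’d paste this script into a launch template. For reference, here’s a minimal sketch of the same step using Boto3, the AWS SDK for Python; the template name, AMI ID, and instance type are placeholders you’d swap for your own values:

```python
import base64
import boto3

ec2 = boto3.client("ec2")

# The user data script: install Apache and serve a simple page.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
echo "<h1>Hello from $(hostname -f)</h1>" > /var/www/html/index.html
"""

# Hypothetical template name; AMI ID and instance type are placeholders.
ec2.create_launch_template(
    LaunchTemplateName="web-server-template",
    LaunchTemplateData={
        "ImageId": "ami-xxxxxxxx",
        "InstanceType": "t3.micro",
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
```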

Step 3 involves creating and configuring an Auto Scaling group.
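
If you’d rather script this step too, creating the Auto Scaling group looks roughly like this; the group name, the launch template from step 2, and the private subnet IDs from step 1 are all hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch new instances from the template created in step 2,
# spread across the private subnets created in step 1.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={
        "LaunchTemplateName": "web-server-template",
        "Version": "$Latest",
    },
    MinSize=1,
    MaxSize=3,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-priv1,subnet-priv2,subnet-priv3",
)
```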

Step 4 requires you to create an Application Load Balancer in the public subnets and attach it to the Auto Scaling group.
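
This step actually covers a few resources: the load balancer itself, a target group, a listener, and the attachment to the Auto Scaling group. A hedged Boto3 sketch, with placeholder subnet and VPC IDs:

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Internet-facing ALB in the public subnets from step 1.
alb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-pub1", "subnet-pub2", "subnet-pub3"],
    Scheme="internet-facing",
    Type="application",
)

# Target group the ALB forwards web traffic to on port 80.
tg = elbv2.create_target_group(
    Name="web-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-xxxxxxxx",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Listener: forward incoming HTTP requests to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Attach the target group to the Auto Scaling group so new instances
# register with the load balancer automatically.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",
    TargetGroupARNs=[tg_arn],
)
```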

In step 5, you create scaling policies that trigger the scaling activity.

This can be achieved in one of two ways…

First, create a CloudWatch alarm that monitors the CPU utilisation of the EC2 instances. If the average CPU utilisation goes above 70%, the Auto Scaling group will add a new EC2 instance to deal with the increased workload.

Then create a second CloudWatch alarm that also monitors the CPU utilisation of the EC2 instances. This one is triggered if the average CPU utilisation goes below 20%, which indicates less traffic is reaching the instances and means any excess EC2 instances can be terminated.
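
Here’s a sketch of the scale-out half of this approach in Boto3; the scale-in half is symmetric, with a ScalingAdjustment of -1 and an alarm using LessThanThreshold at 20%. All the names are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# A simple scaling policy that adds one instance when triggered.
scale_out = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm that fires the policy when average CPU across the group
# stays above 70% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out["PolicyARN"]],
)
```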

The second way to trigger scaling is with a method called target tracking. This is where you set a policy saying you always want the average CPU utilisation to be a certain value, say 30%. Instances then get added or removed automatically so that the average stays at 30%, no matter how much or how little traffic comes into the service.
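
Target tracking is a single API call, which is why it’s often the easier option. A minimal sketch, again with a hypothetical group name:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: keep the group's average CPU utilisation near 30%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-30",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 30.0,
    },
)
```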

This project demonstrates a lot of the skills that employers are looking for. Talking about this project on your resume or in interviews will help highlight your understanding of how scalability works in the cloud.

You should also highlight the fact that this implementation includes security, because you are putting the web servers in private subnets rather than public subnets, with only the load balancer exposed to the internet.

It’s also important to emphasise your understanding of cost saving: not only are you scaling up to meet increased traffic demands, you also have triggers that reduce the number of EC2 instances when there is less traffic, to save money.

Project 2: CI/CD Project

The next project we are going to talk about is a CI/CD project.

CI/CD stands for continuous integration and continuous deployment, and it is an important part of the application development process.

It automates the process of building, testing, and deploying code.

In this scenario, a company has been making code changes by SSHing into the server and manually updating the code base. This has led to a lot of issues with untested code being uploaded directly to production, causing errors, application crashes, and website outages.

To fix these issues, they have decided to implement a CI/CD pipeline that automates code deployment and stops developers from manually uploading code to the server.

Your task is to create a CI/CD pipeline that takes code from your local computer and automatically deploys it to an EC2 instance in AWS.

Here's how we make it happen:

Step 1 involves creating a remote code repository. In our example we are going to use GitHub, but you can also use other options like GitLab or Bitbucket.

Step 2 involves setting up AWS CodePipeline, the orchestration service that acts as the backbone of the whole pipeline.

Step 3 involves setting up AWS CodeDeploy, which is our tool for deploying the code to the EC2 instance.

Step 4 involves configuring all these services to connect to each other to complete the pipeline.

The test that this works: we can push code from our local computer to the remote repository, GitHub in our case.

Once the code lands in GitHub, it should automatically trigger CodePipeline to take that code and deploy it to our EC2 instance using CodeDeploy.
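
One detail worth knowing: CodeDeploy expects an appspec.yml file in the root of your repository telling it where to copy the files and which lifecycle scripts to run. Once everything is wired up, you can watch a release move through the stages with a small Boto3 script like this one (the pipeline name is a placeholder):

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Print the latest status of each stage in the pipeline.
response = codepipeline.get_pipeline_state(name="my-web-app-pipeline")
for stage in response["stageStates"]:
    status = stage.get("latestExecution", {}).get("status", "Unknown")
    print(f"{stage['stageName']}: {status}")
```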

This CI/CD project will impress hiring managers for three reasons:

The first is that a pipeline like this enables the organisation to deploy applications faster. This means they can release new features and implement bug fixes more quickly, which allows them to stay ahead of the competition.

Another benefit of CI/CD is that it helps improve code quality, because you can implement automated testing throughout the pipeline, which catches errors earlier in the development process and prevents them from reaching production and causing issues.

The final benefit of implementing CI/CD is that it encourages a more collaborative development process between teams, as code changes can be reviewed by multiple members of the team before deployment occurs.

I have reviewed hundreds of resumes from cloud beginners, and very few have good CI/CD projects, so if you can complete this one you will seriously stand out from the competition.

Project 3: Instance Auto On/Off

The third project we are going to work on is a serverless system to automatically turn EC2 instances on and off to save money.

This is probably the most advanced of these beginner projects because it involves serverless technology. The skills you are going to use here are Python programming and serverless computing with AWS Lambda.

Here’s the scenario:

A company has a development EC2 web server that is only used during office hours, 9am to 5pm, Monday to Friday.

The company is looking for ways to reduce their cloud costs and they’ve realised that by turning off the EC2 instance when it’s not in use, they will be able to save up to 60% of their costs.

They want you to figure out an automated way to stop the instance at the end of the work day at 5pm and start it at the beginning of the work day at 9am.

Now that we understand the problem let’s discuss the solution:

The first step is to create two Lambda functions. The first will be written in Python, using Boto3 (the AWS SDK for Python) to turn the EC2 instance off. We can call this function the Stop Lambda.

Once you have tested this, you then create a second Lambda with exactly the same settings, except this time you script it to turn the instance on. Let’s call this the Start Lambda.
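
Here’s a minimal sketch of what the Stop Lambda’s handler might look like. The instance ID is a placeholder, and the function’s execution role needs the ec2:StopInstances permission (ec2:StartInstances for the Start Lambda):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID; in practice you might read this from an
# environment variable or look the instance up by tag.
INSTANCE_IDS = ["i-0123456789abcdef0"]

def lambda_handler(event, context):
    # The Stop Lambda's handler. The Start Lambda is identical except
    # it calls ec2.start_instances instead.
    ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    return {"stopped": INSTANCE_IDS}
```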

Now that the Lambdas are ready, you can create two scheduled triggers using Amazon EventBridge.

A quick note about EventBridge…

EventBridge is an AWS service that allows you to create, run, and manage scheduled tasks at scale. It can trigger AWS Lambda functions through a scheduled trigger: you configure a time schedule on which the Lambda is invoked. For example, you could configure a Lambda function to be triggered every day at 6pm. This is called a scheduled event.

Now that we know about EventBridge, we can configure two event triggers and connect them to their respective Lambdas. The first is scheduled for 9am every Monday to Friday and is connected to the Start Lambda.

The second is scheduled for 5pm every Monday to Friday and is connected to the Stop Lambda.
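
If you create the schedules through the API rather than the console, the two rules might look like this. EventBridge cron expressions are evaluated in UTC, so shift the hours to match your timezone, and note that after creating each rule you still need to register the Lambda as a target (put_targets) and grant EventBridge permission to invoke it:

```python
import boto3

events = boto3.client("events")

# Hypothetical rule names; 9am/5pm here are UTC.
events.put_rule(
    Name="start-dev-server",
    ScheduleExpression="cron(0 9 ? * MON-FRI *)",
)
events.put_rule(
    Name="stop-dev-server",
    ScheduleExpression="cron(0 17 ? * MON-FRI *)",
)
```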

The result is that every weekday morning the EC2 instance automatically starts up, and at 5pm at the end of the day it automatically turns off, saving the company a lot of money.

As you can see, this project shows not only your programming skills but also your understanding of serverless technology. Most importantly, it shows your understanding of cloud cost-saving strategies, which is a skill employers are always looking for.

Project 4: Fargate Web Server

The next project I want you to complete is one that involves Docker and Container technology.

Here’s the scenario:

Your company wants to spin up a blog, and they decide that WordPress is the best way to go.

Traditionally, WordPress would have been deployed on an EC2 instance, but they have decided they want to use this new technology they’ve been hearing about called Docker, and if possible they would like the solution to be serverless.

To achieve this goal, you turn to Amazon ECS with Fargate.

Fargate is an AWS technology that allows you to run containers serverlessly, which means there is no need to configure and manage servers or EC2 instances.

One of the benefits of Fargate is that you save on operational overhead, as you don’t need to manage, secure, or scale any servers. All you need to manage is your containerised application, which is a huge time saving for developers.

Here are the steps to making this project happen:

Step 1 involves creating a VPC with three public subnets and three private subnets.

Step 2 involves creating an Application Load Balancer in the public subnets; this will route traffic to the containers once they are ready.

Step 3 involves creating an RDS database in a private subnet; this will act as the database layer for our application.

Step 4 involves creating an ECS cluster and configuring the service and task definition with the container image and other details.
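
For the task definition in step 4, a minimal Fargate sketch using the official WordPress image might look like the following. The execution role ARN and the database values are placeholders; in a real setup you’d inject the database password from Secrets Manager rather than as a plain environment variable:

```python
import boto3

ecs = boto3.client("ecs")

# A minimal Fargate task definition for the official WordPress image.
ecs.register_task_definition(
    family="wordpress",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "wordpress",
            "image": "wordpress:latest",
            "portMappings": [{"containerPort": 80}],
            "environment": [
                {"name": "WORDPRESS_DB_HOST", "value": "your-rds-endpoint"},
                {"name": "WORDPRESS_DB_USER", "value": "wordpress"},
                {"name": "WORDPRESS_DB_NAME", "value": "wordpress"},
                # In production, supply the DB password via the "secrets"
                # field (Secrets Manager) instead of a plain value.
            ],
        }
    ],
)
```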

Step 5 involves using security groups to give the load balancer, containers, and database permission to communicate with each other.

This is an excellent project to demonstrate your expertise with Docker and ECS Fargate, and completing it puts you in a better position to get that job offer, because this is exactly the sort of experience employers are looking for.

Project 5: Terraform Auto Scaling

The final project we are going to talk about today is the Terraform Project.

For those who don’t know, Terraform is what is known as an infrastructure as code (IaC) tool.

Before we proceed, it’s important to understand what infrastructure as code is and why it matters.

Traditionally, when you create cloud resources, you do it by clicking around in the console. However, there is a better way to achieve the same outcome.

Rather than doing it by hand, you can create your infrastructure using scripts and code files, hence the name infrastructure as code.

One of the benefits of IaC over manual deployment is that it makes your infrastructure repeatable and easy to scale.

Here’s an example:

Let’s assume you manage the AWS infrastructure of a growing SaaS company that has deployed dozens of EC2 instances and RDS databases in the North Virginia region.

They decide to expand to Europe, and in order to reduce latency and improve the user experience, they want to duplicate all their North Virginia resources in the Ireland region.

If all the original resources were created using the console, it would take a long time to identify every resource that needs to be duplicated and then manually recreate each one.

Doing it this way would not only take a lot of time, there is also the risk of human error: an EC2 instance or a database could easily get missed in the process.

If, however, all the infrastructure had been scripted using a tool like Terraform, duplicating the resources in the new region would be a lot easier: all that would be required is for a few parameters to be updated and for the code to be redeployed in the new region.
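
Terraform expresses this in its own configuration language, HCL, but the underlying idea, infrastructure defined in code with the region as just another parameter, can be illustrated in this article’s Python using Boto3. This is a hypothetical sketch of the concept, not how Terraform itself works:

```python
import boto3

def deploy_web_server(region: str) -> str:
    """Launch the same (hypothetical) web server in whichever region is given."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",  # placeholder; note AMI IDs differ per region
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    return response["Instances"][0]["InstanceId"]

# The original deployment in North Virginia...
deploy_web_server("us-east-1")
# ...duplicated in Ireland by changing a single parameter.
deploy_web_server("eu-west-1")
```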

In summary, recreating all your cloud resources in a new region manually is time-consuming and error-prone, but if everything was built with infrastructure as code, it can all be recreated in a new region quickly and with minimal risk of errors.

Now that you understand why Terraform is important, here’s the project I’d like you to complete.

This project is actually the same as the first one we talked about, the Auto Scaling project, except this time, rather than creating all those resources in the AWS console, I want you to create them all using Terraform.

If you can do this then you will demonstrate to cloud employers that you have the automation skills that they are looking for. This will help you stand out and become more employable.

Now, if you’d like to see comprehensive video solutions to all of these projects…

If you want the solutions, all you need to do is sign up to our cloud career acceleration program at cloudcareermentor.com.

This program will not only help you improve your technical skills and confidence through high-quality projects, it will also show you how to write your resume in a way that attracts the attention of recruiters, and help you prepare for interviews in a way that impresses hiring managers.

Don’t just take it from me: one of our students landed their first high-paying cloud job within six months of going through the program.