Part 1 described the reasoning and setup behind the comparison. In this post I will go through the AWS implementation and the final comparison of IaaS vs. VPS.
Setting up an EC2 and RDS instance was extremely simple. After signing up to the service and entering payment details, it was a case of going to the AWS console, which is the home page for all the AWS services.
Just like a Linode VPS, all management is done online, but the difference is that AWS is split by region. Each region reflects an area of the world, roughly divided by continent. Regions matter because they affect latency depending on where the target audience is, and each has different pricing and is subject to different taxes. This also enables a federated architecture, load balanced by region. Each region has sub-regions (Availability Zones), so servers can be backed up in different parts of a region should one sub-region go down.
Creating EC2 instances is very simple using the wizard. It is a guided deployment with selections for instance type (more like size), sub-region, and Amazon Machine Image (AMI). AMIs are base snapshots of a VM, ranging from images designed by Amazon and distributions like Ubuntu and Windows through to community images. It is also possible to start from scratch and create your own AMI.
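The same launch can also be scripted with the AWS CLI rather than clicked through in the wizard. This is only a minimal sketch; the AMI ID, key pair, security group name and Availability Zone below are hypothetical placeholders, not values from my setup.

```shell
# Sketch: launch one instance from the CLI (all IDs are placeholders).
# --image-id      the chosen AMI, e.g. an Ubuntu image
# --instance-type the "size" selected in the wizard
# --placement     pins the instance to a sub-region (Availability Zone)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name my-keypair \
    --security-groups web-sg \
    --placement AvailabilityZone=eu-west-1a
```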
Once configured and the instance has started, it is a case of ensuring the security policies are set up correctly and remoting onto the machine. The defaults usually include ports for Windows RDP and SSH. Amazon uses internal and external DNS names for reference; if a static IP is required, one can be purchased and assigned for an additional cost. This is a nice layer of security, as it allows the EC2 instance to have ports open while using the security policy to block external connections, therefore only allowing AWS servers access to the open ports. Whilst security policies can be applied to multiple EC2 instances, once an instance has booted it is stuck with its policy for the life of the instance. This is a real let-down, and to allow flexibility it is probably best to create a separate policy for each instance in a changing environment.
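Opening the RDP and SSH ports in a security policy can be done from the CLI as well. A minimal sketch, assuming a group called `web-sg` (a placeholder) and an illustrative source range; `0.0.0.0/0` would mean "from anywhere", so in practice you would restrict the CIDR to your own address range as described above.

```shell
# Sketch: open SSH (22) and RDP (3389) in a security group.
# "web-sg" and the CIDR below are placeholders.
aws ec2 authorize-security-group-ingress \
    --group-name web-sg --protocol tcp --port 22 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress \
    --group-name web-sg --protocol tcp --port 3389 --cidr 203.0.113.0/24
```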
Creating an RDS instance was more or less the same as creating an EC2 instance, except it takes longer. The wizard walks through the database type, such as MySQL, Oracle and MS SQL. RDS also has some of the same features as EC2, like security group policies and snapshots. It too uses DNS names, and an IP address can be purchased and assigned for a fee.
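As with EC2, the RDS wizard has a CLI equivalent. This is a sketch only; the identifier, credentials, instance class and storage size are illustrative assumptions, not my actual settings.

```shell
# Sketch: create a small MySQL RDS instance (all values are placeholders).
aws rds create-db-instance \
    --db-instance-identifier blog-db \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password 'change-me' \
    --allocated-storage 20
```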
In the security settings you can grant instances access to other instances. Once the EC2 and RDS instances have been created, they both need to be configured to allow each other through the firewall.
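One way to let the EC2 instance through the RDS firewall, assuming the older DB security group mechanism that RDS offered at the time, is to reference the EC2 security group directly. The group names and account ID here are placeholders.

```shell
# Sketch: allow members of the EC2 security group "web-sg" to reach the
# database via the RDS DB security group (names and account ID are placeholders).
aws rds authorize-db-security-group-ingress \
    --db-security-group-name default \
    --ec2-security-group-name web-sg \
    --ec2-security-group-owner-id 123456789012
```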
The next step is to set up EC2, which involves remoting onto the instance and installing the required applications. The only caveat is to either allow all connections through the security policy or add a specific IP so the remote access can take place; this also applies to RDS. For EC2 I used Ubuntu and installed the LAMP stack minus the database. On RDS I connected and restored a backup of a database.
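The software side of that step might look something like the following on Ubuntu. The package names reflect one plausible LAMP-minus-DB selection, and the RDS endpoint, database name, user and dump file are all placeholders rather than my real values.

```shell
# Sketch: install Apache and PHP (LAMP minus the database) plus a MySQL
# client on the Ubuntu EC2 instance.
sudo apt-get update
sudo apt-get install -y apache2 php libapache2-mod-php php-mysql mysql-client

# Then restore an existing dump into the RDS instance from the EC2 box.
# Endpoint, user, database and file names below are placeholders.
mysql -h blog-db.abc123.eu-west-1.rds.amazonaws.com \
      -u admin -p wordpress < backup.sql
```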
Once everything was set up and installed, it was time to test that the site was up and running. On the first try everything was. The only issue I ran into was SSL certificates, which I did not want to transfer onto this experimental setup. This meant I had to either disable SSL support in WordPress or not access the admin part of the site. I chose the latter for now.
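Had I gone the other way, one way to stop WordPress forcing SSL on the admin pages is to set the `FORCE_SSL_ADMIN` constant in `wp-config.php`. The file path and the insertion point are assumptions about a default install, not details from my setup.

```shell
# Sketch: disable forced SSL for wp-admin by inserting the constant above
# the "stop editing" marker in wp-config.php (path is an assumption).
sudo sed -i "/stop editing/i define('FORCE_SSL_ADMIN', false);" \
    /var/www/html/wp-config.php
```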
So far so good. There is more AWS setup required than with Linode, but the software side is more or less exactly the same as a VPS. At the time of starting, my billing was reset to £0, but when I enabled S3 I found buckets left over from when I used JungleDisk years ago. I thought that once I closed my account down they would be wiped, but it appears to have kept them, and it wasn't until I noticed the daily increasing cost of S3 that I realised what had happened. This will skew the first month of billing.