
Experimenting with Amazon EC2

How easy is it to build an app using Amazon's cloud computing environment? I gave it a go to find out.
Written by Colin Barker, Contributor

People often talk about the cloud as if it were a magical solution that fixes all kinds of IT problems in an instant, but behind the simple phrase "then we put it in the cloud" lies a host of complexity that requires careful planning.

I wanted to see how easy, or otherwise, it would be for a non-technical person like myself to follow the steps required to build a simple cloud application. I also wanted some insight into the world of Amazon Web Services, so I joined an Amazon seminar that would take me from the first steps of setting up applications through to running them, fine-tuning them and troubleshooting.

Getting started

Amazon's main cloud service is the Elastic Compute Cloud (EC2). For the demonstration we started by launching an EC2 "instance".

The instance is launched from the AWS console, the web-based user interface for AWS. There are many different types of instances running on different operating systems, including Red Hat and SUSE Linux as well as Amazon's own Linux. You can also use Windows Server, SQL Server and others.

For our demonstration, AWS software architect Ian Meyers chose an Amazon Linux 64-bit instance with the aim of creating a very simple application.

Next we had to choose what type of instance we wanted to run. We had to decide whether it would be optimised for compute, which means a higher ratio of virtualised compute units to memory, or optimised for memory-intensive applications. We also had to choose whether the instance should be storage-optimised or cluster-optimised.

The choices made would be based on the user's knowledge and experience mixed in with some trial and error. That ability to try different scenarios is, of course, what makes these sessions so useful. Such is the competition these days between cloud companies that users can usually try these things out for free.

Next the system had to be configured for the number of instances it would run. You can start with one machine and, when that is working, spin up more machines as demand increases. As we were to find, it is quite straightforward with AWS to start small and then add more power as required. If demand slackens, the opposite - turning systems off - is just as easy.

Next we configured the user data that is used for the start-up process (known as "bootstrapping"). This is how a machine image is customised for your own use when it launches, so it can include, say, your company's software. In our demonstration that was as simple as copying a script into the user data section for the web server. As we were shown, the user data can be anything you want: in this case Meyers used a bash script ("bash" being the standard Linux shell), which installed HTTPD (the Apache Hypertext Transfer Protocol Server), PHP and MySQL, then started HTTPD and checked that it was running.
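For readers who prefer to see the same step in code, here is a minimal sketch using boto3, the AWS SDK for Python, with the bootstrap commands passed as user data. The AMI ID, key-pair name and security group are placeholders rather than values from the demo.

```python
# Launch an EC2 instance with a bash user-data script, roughly mirroring the
# demo's bootstrap step. IDs and names below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Bash user data: install and start Apache (httpd), plus PHP and MySQL.
user_data = """#!/bin/bash
yum install -y httpd php mysql
service httpd start
"""

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",          # placeholder Amazon Linux 64-bit AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",           # assumed key-pair name
    SecurityGroups=["web-server"],   # assumed security group
    UserData=user_data,
)
print("Launched", response["Instances"][0]["InstanceId"])
```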

"That is as simple as just copying a script into the user data section [of the program]," said Meyers, "and then the machine comes up and runs". Once it is running the machine then puts into the metadata service which again, as Meyers explained, is designed to give the users all the information that is needed including the "context" of the machine, ie what region is it running in, what availability zone is it in, what is its IP address, and so on.

Adding components

Amazon EC2 instances come with local, ephemeral disk drives; in our example Meyers used a Windows file system, but he could have used a Linux one or something else. As Meyers pointed out, those file systems are destroyed when the instance is terminated. To stop that from happening - because you will presumably want your data to persist - you can add storage from whatever source you like, including Amazon's own EBS (Elastic Block Store).

On Windows the storage shows up as a lettered drive. On Linux you can map in whatever kind of device you want, and build RAID volumes as required.
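A hedged sketch of the EBS step with boto3 is shown below; the availability zone, instance ID and device name are placeholders.

```python
# Create a persistent EBS volume and attach it to a running instance.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# A 20 GiB volume in the same availability zone as the instance.
vol = ec2.create_volume(AvailabilityZone="eu-west-1a", Size=20, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# Attach it: on Linux it appears as a block device to format and mount,
# on Windows it shows up as a lettered drive.
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```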

You can tag instances with up to 10 user-defined pieces of information. A security group - a software-based firewall that sits in front of the instance - ensures that only connections on specified ports, originating from specified IP addresses, can get into the system. For the instance he was using, Meyers set up a web server that could be reached over SSH (Secure Shell) for administration.
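The same firewall and tagging steps look roughly like this in code; the group name, admin address range and instance ID are illustrative assumptions.

```python
# Create a security group that allows HTTP from anywhere and SSH from an
# administrative range, then tag the instance.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

sg = ec2.create_security_group(
    GroupName="web-server",
    Description="HTTP from anywhere, SSH for administration",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},  # example admin range
    ],
)

# Tag the instance with user-defined key/value pairs.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[{"Key": "Name", "Value": "demo-web-server"},
          {"Key": "Environment", "Value": "test"}],
)
```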

Loading up encryption

To keep systems secure, the Amazon environment uses the key-pair method of encryption. As Meyers explained, if you wish to use encryption at this point the system needs to know of any key pairs associated with the environment.

Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. The public key you store with AWS is securely distributed onto the instances you launch, while you keep the private key and use it to access the environment.

In the case of Windows, the user retrieves the administrator password through the AWS console and must supply the private key to decrypt it. Public/private key pairs are a fairly basic but standard and reliable security mechanism these days.
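Creating a key pair and keeping the private half locally is a one-call operation; the key name and file path below are placeholders.

```python
# Create an EC2 key pair. AWS keeps only the public key; the private key
# material is returned once and must be saved by the user.
import boto3, os

ec2 = boto3.client("ec2", region_name="eu-west-1")

key = ec2.create_key_pair(KeyName="my-key-pair")

path = os.path.expanduser("~/.ssh/my-key-pair.pem")
with open(path, "w") as f:
    f.write(key["KeyMaterial"])
os.chmod(path, 0o600)  # owner-only permissions, as SSH expects
```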

The stage is set – run the app

Next Meyers launched the app he had built and went to the Instances Overview to check its progress. At this point the user could review the status checks as the system started up.

Then it was all about checking DNS and looking at the addresses to make sure that everything was working properly. This was a straightforward matter of going through the HTTP addresses of the various devices attached to the system and checking they were all responding and working correctly.

Once the main systems were checked, it was a matter of checking through the other devices, such as drives, printers and so forth, that might be attached to the systems. The system then collected the information it needed, including the instance ID - a unique identifier for each instance - and the availability zone each particular machine was associated with.
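The same details can be read back through the API; this short sketch assumes a placeholder instance ID.

```python
# Show the status checks and basic context for a running instance.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
instance_id = "i-0123456789abcdef0"  # placeholder

# System and instance status checks, as shown in the Instances overview.
status = ec2.describe_instance_status(InstanceIds=[instance_id])
for s in status["InstanceStatuses"]:
    print(s["InstanceId"],
          s["SystemStatus"]["Status"],
          s["InstanceStatus"]["Status"])

# Instance ID, availability zone and public DNS name.
desc = ec2.describe_instances(InstanceIds=[instance_id])
inst = desc["Reservations"][0]["Instances"][0]
print(inst["InstanceId"],
      inst["Placement"]["AvailabilityZone"],
      inst.get("PublicDnsName"))
```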

As he went through the process, Meyers was able to highlight some of the issues that can crop up. At one point he was downloading objects, most of which came down very quickly apart from one that was slow. As it happened, the slow download originated in Singapore, and the distance made it slower. To speed it up, Meyers pointed out, he could simply change where the data came from to Dublin or somewhere even closer.

For distributing information there are two main routes: a download distribution or a streaming distribution. Carrying on with his example, Meyers set up domain names; the system automatically looked up his S3 buckets, and he could then associate each one with an origin ID.
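As a rough illustration of that bucket lookup, the sketch below lists the account's S3 buckets and derives the origin domain name and an origin ID for each, in the form a CloudFront distribution configuration would reference them; the naming convention is an assumption for illustration.

```python
# List S3 buckets and build origin IDs and domain names for each.
import boto3

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    origin_domain = f"{name}.s3.amazonaws.com"
    origin_id = f"S3-{name}"
    print(origin_id, "->", origin_domain)
```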

Attending to the front end

Meyers now had the system at the stage where all his images were served through CloudFront, Amazon's content delivery web service. Using it lets the system manager choose either to go directly to a service - which might be directly attached - or to go via the cloud. In truth, these days people will use the cloud virtually all the time because of the reliability and flexibility it offers, although a direct connection may occasionally be better when speed is imperative or the security of particular sites is an issue.

As Meyers showed in his example, CloudFront is a sophisticated piece of software that can also help protect systems from denial-of-service attacks, because it can limit the number of connections allowed.

By this stage, Meyers had set up the entire infrastructure for the front end of the system he was building, but now he moved on to hooking up a database so that real content could be used. Here he brought up the Amazon Relational Database Service (RDS).

Meyers had a particular database in mind, but he began by choosing the engine to run it on. The choices included Oracle Express or Standard Edition, MySQL, and SQL Server Standard or Enterprise Edition, or the user could bring their own licence.

Having chosen MySQL, Meyers then chose the version and the instance class, after which he had the option of turning on Multi-AZ deployment. With Multi-AZ, a copy of the database is kept in an alternate availability zone so that, in the event of a core database failure, the standby can automatically take over.

Also at this stage the user can set up backups and specify the backup retention period, as well as the backup and maintenance windows, to ensure they meet operational requirements.
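Pulled together, the RDS step looks roughly like the sketch below: a MySQL instance with Multi-AZ, a retention period and explicit backup and maintenance windows. The identifier, instance class, credentials and windows are placeholders.

```python
# Create an RDS MySQL instance with Multi-AZ and automated backups.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="demo-db",
    DBInstanceClass="db.t2.micro",
    Engine="mysql",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder credential
    MultiAZ=True,                            # standby copy in another AZ
    BackupRetentionPeriod=7,                 # days of automated backups
    PreferredBackupWindow="02:00-03:00",     # UTC
    PreferredMaintenanceWindow="sun:03:30-sun:04:30",
)
```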

Now Meyers grabbed some content to use as an example and set it running. At this stage he hit the first glitch of the demo: having set everything up, the application did not run. Of course such quirks only add to the atmosphere and realism of a live demo by introducing a note of tension, and Meyers had it working again quickly.

Once it was working, Meyers was able to show the instance through the final part of the process as it worked dynamically. Now with RDS configured, he wanted to show how a web application could scale.

Meyers then stopped the server and brought it back up using an Elastic IP address. Elastic IP addresses are a core part of the Elastic Compute Cloud: they are fixed, public IP addresses that can be associated with different machines over time, Meyers explained, so you can allocate one and then remap it to new instances as often as you need.
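The allocation and remapping can be sketched as follows; both instance IDs are placeholders.

```python
# Allocate an Elastic IP and move it between instances over time.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Allocate a fixed public IP address in the account.
eip = ec2.allocate_address(Domain="vpc")

# Associate it with the current instance...
ec2.associate_address(InstanceId="i-0123456789abcdef0",
                      AllocationId=eip["AllocationId"])

# ...and later remap the same address to a replacement instance.
ec2.associate_address(InstanceId="i-0fedcba9876543210",
                      AllocationId=eip["AllocationId"],
                      AllowReassociation=True)
```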

So now Meyers showed how he could take an image of a machine - a "freeze-dried picture of that system" at a point in time - and then launch new machines from it.

In his example Meyers used what is called an AMI, or Amazon Machine Image: a template for a particular Linux or Windows instance that is supported and maintained by Amazon. You can copy AMIs to different regions if you want to share them.
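Capturing that image and copying it elsewhere is again a couple of API calls; the names, IDs and regions below are placeholders.

```python
# Create an AMI from a running instance and copy it to another region.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",        # placeholder instance ID
    Name="demo-web-server-snapshot",
    Description="Snapshot of the configured web server",
)

# Copy the image to a second region so machines can be launched there too.
ec2_us = boto3.client("ec2", region_name="us-east-1")
ec2_us.copy_image(
    SourceRegion="eu-west-1",
    SourceImageId=image["ImageId"],
    Name="demo-web-server-snapshot",
)
```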

Auto-scaling for open flood gates

Imagine that you have just developed your dream application, game, performance tool or whatever, and started selling it on the web. On Monday you had five sales, on Tuesday 500, on Wednesday 500,000 - how do you cope?

The ability to deal with suddenly ballooning workloads - as well as workloads that shrink just as quickly - is one of the big reasons why a customer would look at Amazon Web Services in the first place.
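For completeness, here is a hedged sketch of how that elasticity can be expressed: a launch configuration built from the captured AMI, and an Auto Scaling group that grows and shrinks between set bounds. All names, IDs and zones are placeholders.

```python
# Create a launch configuration and an Auto Scaling group around it.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="demo-web-lc",
    ImageId="ami-xxxxxxxx",          # AMI captured from the configured server
    InstanceType="t2.micro",
    SecurityGroups=["web-server"],
    KeyName="my-key-pair",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="demo-web-asg",
    LaunchConfigurationName="demo-web-lc",
    MinSize=1,                       # never fewer than one instance
    MaxSize=10,                      # cap the Wednesday rush
    DesiredCapacity=2,
    AvailabilityZones=["eu-west-1a", "eu-west-1b"],
)
```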

Amazon has nine main regions around the world, and these are split into a further 17 areas, which compares well against any of the large companies in terms of worldwide coverage. Whether Amazon's coverage is more wide-ranging or "better" than that of other big suppliers such as IBM, Apple, Microsoft Azure, Oracle and Red Hat is open to question; what really matters to customers is whether the companies have data centres where they need them.

So what is AWS like to use? The software appeared slick, as you would expect in a demo, but there were also a couple of glitches where software did not immediately load, which some might have found concerning. Personally, I am reassured when demos do not run entirely smoothly, as it convinces me that I am watching a "real" demo and not a flannel job.

The best thing to do is probably try it out for yourself. It may not be the first rule of systems management but it is a good one nonetheless - you can never have too many sticks to beat your supplier with.
