Cloud computing with Amazon's Elastic Compute Cloud
Preparing and Uploading
Amazon provides two sets of tools. The first bundle of software you need is the AMI Tools package, which contains the tools for creating AMIs and uploading them to Amazon. The second is the EC2 command-line tools bundle, which performs more generic tasks, such as creating and controlling EC2 instances. To start, download both files and extract them into a directory. Although you can install these in a system directory (/usr/local, for example), for this example, install them in your home directory. With the files in place, set some environment variables. The EC2 software requires a couple of custom variables:
export EC2_HOME=~/ec2-api-tools/
export EC2_AMITOOL_HOME=~/ec2-ami-tools/
For more information on these variables, see the Readme file ec2-ami-tools/readme-install.txt.
Now, make sure JAVA_HOME is set, and add the EC2 directories to the PATH variable.
export JAVA_HOME=/usr/lib/jvm/cacao/jre/
export PATH=$PATH:~/ec2-api-tools/bin:~/ec2-ami-tools/bin
To check that everything is working, enter a simple command such as ec2-version, which prints the version of the installed tools.
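As a quick sanity check, this sketch restates the exports from above and confirms that each variable ended up non-empty (paths assume the home-directory install described earlier):

```shell
# Environment setup for the EC2 tools; a home-directory install is assumed.
export EC2_HOME=~/ec2-api-tools/
export EC2_AMITOOL_HOME=~/ec2-ami-tools/
export PATH=$PATH:$EC2_HOME/bin:$EC2_AMITOOL_HOME/bin

# Confirm each variable is set to something non-empty.
for var in EC2_HOME EC2_AMITOOL_HOME; do
  eval "val=\$$var"
  if [ -n "$val" ]; then
    echo "$var is set to $val"
  else
    echo "$var is MISSING" >&2
  fi
done
```

If a variable prints as MISSING, revisit the export lines before moving on; the bundling and upload tools below rely on them.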
To use your Linux image, you need to bundle it, upload it to EC2, and register it. To bundle the image, use the ec2-bundle-image tool, which is provided by AMI tools:
ec2-bundle-image -i myimage.fs --cert ec2-keys/cert-XXX.pem --privatekey ec2-keys/pk-XXX.pem -u 1234-2345-1234
This takes your Linux image, splits it into several files, and creates a manifest file, which tells EC2 where your image is hosted in Amazon Simple Storage Services (S3) and how to use it. The split image files are created in /tmp/ by default – have a look once the ec2-bundle-image process is complete.
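The splitting step can be illustrated with standard tools. This hedged sketch mimics only the splitting into parts (not the compression, encryption, or manifest generation that ec2-bundle-image also performs), using a small dummy file in place of a real image:

```shell
# Create a 2 MB dummy "image" standing in for myimage.fs.
dd if=/dev/zero of=/tmp/myimage.fs bs=1024 count=2048 2>/dev/null

# Split it into 1 MB numbered parts, loosely mimicking how
# ec2-bundle-image chops an image into uploadable chunks in /tmp/.
mkdir -p /tmp/bundle-demo
split -b 1048576 -d /tmp/myimage.fs /tmp/bundle-demo/myimage.fs.part.

ls /tmp/bundle-demo
```

The real tool's output also includes the manifest XML file, which is what ties the parts back together when EC2 fetches them from S3.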
Next, upload the image with the ec2-upload-bundle tool, which takes all the files you just created on your local machine and uploads them to S3:
ec2-upload-bundle -b my-image \
  -m /tmp/myimage.fs.manifest.xml \
  -a access-key-here -s secret-key-here \
  --ec2cert ~/test1518.pem
This might take some time, so make sure your terminal session won't time out while you're waiting (e.g., run it inside screen). After the upload has completed, look in your S3 account and notice that the bucket named my-image contains the files that you created with ec2-bundle-image.
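Because a long upload can fail partway through, it can also help to wrap the command in a simple retry loop. This is a hedged, generic sketch; `true` below is a stand-in for the real ec2-upload-bundle invocation:

```shell
# Retry a command up to three times before giving up; useful for long
# network operations such as ec2-upload-bundle.
retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge 3 ]; then
      return 1
    fi
    sleep 1
  done
}

# Stand-in for: retry ec2-upload-bundle -b my-image -m ... -a ... -s ...
retry true && echo "upload succeeded"
```

Because ec2-upload-bundle skips parts that already exist in the bucket, rerunning it after a failure resumes rather than restarts the upload, so a retry loop is cheap.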
Your Linux image is now sitting on S3 with a manifest file.
Registering and Using the AMI
The last step is to register and use the Linux image:
ec2-register my-image/myimage.fs.manifest.xml -K ~/.ec2/pk-XXXX.pem -C ~/.ec2/cert-XXXX.pem

Note that ec2-register refers to the manifest file on S3, not on your local machine – hence the path my-image/myimage.fs.manifest.xml, which starts with the name of the bucket you uploaded to. Also, you can register through ElasticFox by clicking the green plus icon in the AMI listing and entering the path to the manifest file.
To use the image, fire up ElasticFox, refresh the list of AMIs, and find your new AMI with the filter box at the top right of the AMI list. Create a new instance of the AMI, and there you have it: you're running your own Linux image on EC2 for US$0.10 an hour.
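If you prefer the command line to ElasticFox, the EC2 API tools can launch the instance directly with ec2-run-instances. A hedged sketch; the AMI ID and key-pair name below are placeholders you'd replace with your own values:

```shell
# Placeholders: substitute your registered AMI ID and your key pair name.
AMI_ID=ami-xxxxxxxx
KEYPAIR=my-keypair

# Build the launch command; run it once your credentials are configured.
CMD="ec2-run-instances $AMI_ID -k $KEYPAIR"
echo "$CMD"
```

Passing a key pair with -k is what lets you SSH into the instance afterwards; without it, you have no way to log in to your own image.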
Once the instance is running, SSH into it and play around. Very quickly you'll decide what software and content files you want on all your EC2 instances, and you can then push those files and programs into your AMI using the steps above.
If you think your image is really good, you can share it for free or charge others for the use of it through Amazon.
Playing in the Clouds
Like any new technology, cloud computing is fun to play with, but you'll like it even better if you can get some really good use out of it.
So, what is EC2 good for?
Cloud computing makes it easier to throw vast amounts of hardware at a problem without having to worry about the details of hosting, networking connectivity, cooling, or the boredom of 100 hosting contracts. This makes EC2 great for anything that requires lots of servers – processing millions of images, searching and cataloging tasks, and so on. Anything that can be done quicker by throwing more computing power at it can use EC2.
And because you can requisition servers on the fly, cloud computing is good for time-sensitive tasks, such as sending hundreds of items of email over lunch or preparing lots of video files while the user waits. Scaling on the fly means you don't have dozens of servers sitting around doing nothing (and costing you money).
The cloud is also suited to any service that might need to scale when you don't know the number of end users – for example, social networks, intranets, extranets, or online applications. Also, you can use EC2 to test new server configurations, and you can use the cloud to test applications.
Cloud computing is set to change the way applications are built and deployed. Anything that is impossible now because you can't afford the servers becomes wonderfully possible – or at least much cheaper. Creating custom AMIs will allow you to get the most out of the service by launching EC2 instances fine-tuned for your particular applications. Building and uploading images can take time, but once you have them, it is easy to tweak the images to contain exactly what your applications need and no more.
And once you can create 1,000 copies of your application, you can stop worrying about those server loads.
- Creating an AWS account: https://aws-portal.amazon.com/gp/aws/developer/registration/index.html
- EC2 homepage: http://www.amazon.com/ec2/
- Amazon web services: http://developer.amazonwebservices.com/connect/entry.jspa?entryID=609
- EC2 AMI tools: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=368&categoryID=88
- Amazon command-line tools: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88
- Selenium: http://selenium-grid.openqa.org/