Kyo Lee

Open-Source Cloud Blog


A Developer’s Story on How Eucalyptus Saved the Day

This is a short story about how a UI developer at Eucalyptus used Eucalyptus to save time while developing Eucalyptus.


The agenda for the day is to set up Travis CI for the latest Eucalyptus user console, Koala. Quoting Wikipedia, “Travis CI is a hosted, distributed continuous integration service used to build and test projects hosted at GitHub.” In other words, we want to set up an automated service hook on Koala’s GitHub repository so that whenever developers commit new code, “auto-magic” takes place somewhere on the Internet, which ensures that the developers did not screw things up by mistake, which, in turn, allows us developers to sit back and enjoy a warm cup of “post-commit-victory” tea while our thoughts drift away on the sea of Reddit.

But, before arriving at such a utopia, first things first. Must read the instructions on Travis CI.

Luckily, Travis CI has put together a nice and comforting set of documentation on how to hook a project into Travis CI (http://about.travis-ci.org/docs/user/getting-started/), as well as an impressive, pain-free registration interface. The first few steps are as easy as clicking buttons.

And, of course, nothing ever comes that easy. Now I am looking at the part where I need to create the YAML configuration file (.travis.yml) for Koala’s build procedures. Done with the button-clicking. Time to put down the cup of tea, because now I have some reading and thinking to do.

A few minutes later, I am stunned by the line below:

“Travis CI virtual machines are based on Ubuntu 12.04 LTS Server Edition 64 bit.”

Oh, bummer.

The main development platform for Koala has been CentOS 6, meaning Koala’s build dependencies and scripts have been targeted at a CentOS 6 environment. This means I need to go over the build procedures and dependency settings so that Koala can be built and tested on Ubuntu 12.04. But, first, where do I find those Ubuntu machines?

Then, the realization, ‘Wait a second here. I have Eucalyptus.’

I open up a browser and log into the Eucalyptus system I have been using as the backend for developing Koala. I launch a couple of Ubuntu 12.04 instances. Within a minute, I have two fresh Ubuntu 12.04 virtual machines up and running.

Immediately I log in to the first instance and start installing Koala to validate the build procedures in an Ubuntu 12.04 environment. Along the way, I discover various little issues in this new environment and tweak things to fine-tune Koala’s build procedures. Once things feel ready, I log into the other Ubuntu instance to verify the newly adjusted build procedures in its fresh setting. More mistakes and issues are captured, and more adjustments are made. Meanwhile, the first instance has been shut down and a new Ubuntu instance has been brought up. With this new instance, I am able to rinse and repeat the validation of the build procedures. Of course, there are some mistakes again. They get fixed and adjusted. Meanwhile, another instance goes down and comes up fresh.

The juggling of instances lasts a couple more rounds until the build procedures are perfected. Now I am confident that Koala will build successfully in an Ubuntu 12.04 environment. Commit the new .travis.yml to GitHub. It’s time for the warm cup of “post-commit-victory” tea.
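For the curious, here is a minimal sketch of what such a configuration might look like for a Tornado-based Python project like Koala. The dependency file and test command below are illustrative assumptions, not the actual Koala setup:

# .travis.yml (a minimal sketch; file names below are assumptions)
language: python
python:
  - "2.7"
install:
  - pip install -r requirements.txt   # hypothetical dependency list
script:
  - python setup.py test              # hypothetical test entry point

Travis CI reads this file from the repository root on every push and runs the install and script steps on its Ubuntu 12.04 workers.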


Beyond Continuous Integration: Locking Steps with Dev, QA, and Release

Continuous integration: the practice of frequently integrating one’s new or changed code with the existing code repository [wikipedia]

In this blog post we will talk about how the continuous integration process was put in place for the new component, the Eucalyptus User Console, in order to coordinate the efforts among the dev, QA, and release teams throughout the development cycle of Eucalyptus 3.2.

Background

[Diagram: Eucalyptus User Console component view]

The Eucalyptus User Console is a newly introduced component in Eucalyptus whose main goal is to provide an easy-to-use, intuitive browser-based interface to cloud users, thus assisting dev/test cloud deployments among IT organizations and enterprises. The Eucalyptus User Console consists of two components: a JavaScript-based client-side application and a Tornado-based user console proxy server.

Early Involvement

The first phase of the development was to come up with a quick prototype to demonstrate how the user console would work under the initial architectural design (see the Eucalyptus User Console component diagram above). As soon as the prototype was evaluated and its feasibility was verified, the release team started creating the packages for two major Linux OS platforms: Ubuntu and CentOS/RHEL.

The early involvement of the release team turned out to be the best help any developers or QA engineers could ask for; from the very beginning of the development, the release team was able to provide invaluable information that served as a guardrail for the fast-moving development. Such information included advice on how the files should be named and organized, and on which dependencies should or should not be used in order to meet the requirements of the various Linux distributions. Dealing with such issues at a later stage of the development would undoubtedly have been a major pain in the back-end.


Furthermore, the release team was able to ensure that the development of the new user console would never go off track with respect to the Linux distro requirements by setting up an automated daily package-building process using Jenkins, which utilizes VM resources from our Release cloud that runs on Eucalyptus.

Keeping Up With Eucalyptus

Setting up the automated package-building process allowed the release team to keep an eye on the progress of the user console’s development in terms of the ability to build the packages within the constraints set by the Linux distributions. However, it did not guarantee that the newly built packages contained a version of the user console that worked with the current, up-to-date Eucalyptus cloud, which was also in development.

Thus, the challenge was to ensure that the latest built user console packages work with the latest built Eucalyptus throughout the development.

In order to solve this issue, the QA team created a testunit that automatically installs the latest user console packages on a newly built Eucalyptus. The testunit was then added to the main test sequences used by Eucalyptus 3.2 development in our automated QA system, making the installation of the latest user console packages accessible to all developers at Eucalyptus.

This setup made any failure in the user console package installation visible to all developers throughout the development, allowing failures to be detected and reported quickly.

[Screenshot: results table of a test sequence run by the automated QA system, showing the testunit ui_setup]

The testunit ui_setup can be seen in action above, in the table that displays the results of a test sequence run by the automated QA system. Check out the link below for more details on this testunit:

https://github.com/eucalyptus-qa/ui_setup

Circle of Trust

As the user console evolved out of its prototype state and took on a more product-like shape, the QA team worked in parallel to figure out how to set up the automated testing process for the user console. This blog talks in detail about how Selenium was used to create the automated web-browser testing tool, se34euca.


In the mid-stage of the development, as the features of the user console started functioning in a reasonably stable manner, 3 automated tests were added, incrementally, to ensure the working state of the user console throughout the development.

Those 3 tests are:

  1. user_console_view_page_test – https://github.com/eucalyptus-qa/user_console_view_page_test
  2. user_console_generate_keypair_test – https://github.com/eucalyptus-qa/user_console_generate_keypair_test
  3. user_console_launch_instance_test – https://github.com/eucalyptus-qa/user_console_launch_instance_test

These automated tests were to ask the 3 simple questions below on a daily basis:

  1. Can the user log in and see all the landing pages on the latest user console?
  2. Can the user generate a new keypair using the latest user console?
  3. Can the user launch a VM instance using the latest user console?

Of course, it would be possible, and desirable, to ask more questions in a more complicated fashion. However, during the rapid development phase, asking those 3 simple questions on a daily basis turned out to be sufficient, and effective, for understanding whether something terrible had happened to the user console.


The goal of these automated tests at this stage of the development was not to detect every little defect in the product; it was too soon for that.

The main purpose was rather to serve as an indicator for the developers, QA engineers, and release engineers to assure ourselves that the change that went into the code earlier today did not ruin the delicate trust among the three groups, meaning that the build, installation, and configuration procedures were still intact. Having such assurance checked by mechanical means made the three groups extremely effective at discovering issues during the development, since it allowed each member to narrow down exactly what was responsible for a defect within a finely reduced time frame: hours, rather than days or weeks.

Guardrail For Development

Having the automated package-building process and the automated installation/configuration process in place at the early stage of the development proved to be extremely useful; rather than merely agreeing on written procedures, the dev, QA, and release teams materialized such agreements into actual implementations and put them to work through various automated mechanisms that run on a daily basis. Therefore, throughout the development, we were able to witness and assure ourselves that we were making progress in accordance with the plan and our self-imposed restrictions.

Check out the Eucalyptus Open QA webpage to see the continuous integration at Eucalyptus in action:

Eucalyptus Open QA (beta) – http://ec2-50-112-61-121.us-west-2.compute.amazonaws.com/open_qa.php

A Developer Walks through Cloud

Skip Directly to [Instruction on How to Run the Video Processing Prototype]



1. Little Phone, Big Cloud

A few months ago, a phrase caught my attention: “Instagram for Video”. It was an interesting idea for a mobile application. As a software designer, I dug into the idea, soon to realize one major implementation challenge.


It turns out that video is a collection of pictures; many, many pictures. Given the standard rate of 24 frames per second, even a one-minute-long video comprises 1,440 pictures, which would mean image-processing of 1,440 pictures on a mobile phone. That is a lot of pictures for the small battery in your mobile phone to handle.


There is an alternative scenario: consider moving the image-processing task over to a remote machine that is bigger, stronger, and meaner. In this scenario, the mobile phone could upload the video to a server via the Internet, have it processed remotely, and retrieve the processed video back in a seamless fashion.


However, there is one absolutely crucial requirement in this scenario: we are going to need a big, big, big machine, one big enough to handle millions of requests once this killer application goes viral (go big or go home). There is only one answer to this type of demand: “the Cloud.”

Luckily, there is an open-source cloud available; Eucalyptus is an open-source Infrastructure-as-a-Service cloud platform whose APIs are compatible with those of Amazon’s EC2. This makes Eucalyptus an ideal in-house cloud application development platform: it guarantees that once my killer application runs on Eucalyptus, it will also run on EC2 with no modifications required, thus creating a truly portable cloud application with worldwide deployability.

2. IaaS Cloud

For those who are not familiar with IaaS clouds, let me take you on a quick walkthrough of the cloud.

Eucalyptus and Amazon’s EC2 offer “Infrastructure-as-a-Service” cloud platforms. This means that a cloud-user can request, “Hey cloud, I need 5 machines with full network connectivity and access to storage,” and within minutes the user will have the complete system ready for use.
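With euca2ools, that request might look like the line below; the image ID and keypair name are placeholders, and -n asks for 5 instances at once:

euca-run-instances emi-XXXXXXXX -k mykey0 -n 5 -t c1.medium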


Take this concept a little further; instead of requesting machines for generic purposes, the cloud-user could specify at creation time which machines serve which purposes. For instance, using the example in this article, the cloud-user could ask, “Hey cloud, I want one machine to work as a collector and the rest as image-processors, and have them process my cat video immediately!” Then, the cloud would bring up a network of machines with a specific task assigned to each, and they would start processing the cat video right away. Once the processing was complete, the machines would terminate themselves, leaving only the processed cat video behind.


3. App on the Cloud

Let’s go back to the video-processing application on the cloud. Here I will cover some major design considerations when developing applications on the cloud.

3.1. Parallelism and Elasticity

Designing an application on a distributed system requires a process to be broken down into small tasks. Then, one must identify the tasks that can bring parallelism into the process. In this video-processing application, the process can be broken down into 3 major steps: decoding the video into images, processing the images, and encoding the processed images back into a video. Given this breakdown, the natural approach is to distribute the image-processing task over multiple machines and assign a single machine to perform the decoding and encoding.
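The post does not name the decoding and encoding tools; as a sketch, assuming ffmpeg plays that role on the single machine, the two serial steps might look like:

# decode the input video into one JPEG per frame
ffmpeg -i lovemycat.avi todo/img%06d.jpg

# ... the frames are processed in parallel on other machines ...

# encode the processed frames back into a video at 24 frames per second
ffmpeg -r 24 -i processed/img%06d.jpg output.avi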


One important characteristic of the cloud that must be embraced at the core of the design is elasticity. Elasticity is what differentiates cloud applications from traditional distributed applications. Traditionally, in a distributed computing environment, the number of nodes N in the system is a static value that cannot change during a job. In the cloud environment, however, there is no bound on N; theoretically, N is limitless. This means that at any given point during the job, the system should expect N to grow, or even shrink in some cases. For instance, in our video-processing application, we could initially start with 5 machines assigned as image-processing nodes; in the middle of the processing, however, we should be able to add 5 more nodes to boost throughput. Taking advantage of this elasticity must be considered at the design level of the application.


3.2. Prototype

The following is an overview of the prototype of the video-processing application on the cloud.

For more detailed instructions, please go to the page [Instruction on How to Run the Video Processing Prototype]

The goal of the prototype is to demonstrate a cloud application that performs image-processing tasks in a distributed fashion. The application takes a video file as input, performs image-processing in parallel, and, when it terminates, stores the processed video file in a known, provided storage location.

For the simplicity of the prototype, let’s assume that there is a machine that works as a file server, running an Apache web service in the open that is accessible from the cloud. In other words, any virtual instances (nodes) spawned on the cloud will have access to the files on the file server via wget. Given this setup, when we trigger the collector node, for instance, it can download the input video file from the file server to start the process.
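For example, assuming the files sit at the server’s web root, a freshly booted node could fetch its script and input like so:

wget http://192.168.7.77/collector.pl
wget http://192.168.7.77/lovemycat.avi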


For the prototype, we need to construct two types of nodes: the collector node and the image-processor nodes. However, before I go into further detail, I must explain what takes place when the cloud-user requests an instance from the IaaS cloud.

When the cloud-user asks the cloud, “Hey cloud, I need one machine,” the user is required to specify the image for the machine. In other words, the cloud-user must request, “Hey cloud, I need one machine with the RHEL 6.1 image that I have prepared for this video-processing prototype.” Then, the cloud will bring up a virtual instance that is flashed with the specified RHEL 6.1 image. Since users can prepare and upload images of their choice to the cloud, the possibilities for what the instances can do, or become, are limitless.


For this particular prototype, I prepared a single image to be used by all nodes. I took a generic Ubuntu Karmic image as the base image and modified its ‘rc.local’ script, which is the default script that gets executed automatically when the image boots up. The modified ‘rc.local’ script is set to read a line from the ‘user-data’ field, which gets passed to the instance by the cloud-user at creation. This small modification allows me to control the roles of the instances while having only one image. For example, I can request, “Hey cloud, I want one instance with my special Ubuntu image and have it run the script ‘collector.pl'”, and later, I can ask, “Hey cloud, I want another instance with the same image, but this one will run the script ‘processor.pl’.”
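Here is a minimal sketch of what such a modified ‘rc.local’ might do, assuming the EC2-style metadata service that Eucalyptus also provides; the exact parsing below is illustrative:

#!/bin/sh
# fetch the user-data line handed to this instance at creation,
# e.g. "collector.pl 192.168.7.77 [lovemycat.avi]"
USER_DATA=$(wget -q -O - http://169.254.169.254/latest/user-data)
SCRIPT=$(echo "$USER_DATA" | awk '{print $1}')
SERVER=$(echo "$USER_DATA" | awk '{print $2}')
# download the named script from the file server and execute it,
# passing the full user-data line along as its arguments
wget -q -O "/tmp/$SCRIPT" "http://$SERVER/$SCRIPT"
perl "/tmp/$SCRIPT" $USER_DATA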

The requests in the example above would look like the ones below. Notice the use of the same image ID ‘emi-9BD01749’ but different ‘user-data’ values (-d).

First request to bring up a collector:

euca-run-instances emi-9BD01749 -k mykey0 -n 1 -g group0 -t c1.medium -d "collector.pl"

Second request to bring up a processor:

euca-run-instances emi-9BD01749 -k mykey0 -n 1 -g group0 -t c1.medium -d "processor.pl"

IMG_1585

In the prototype, the actual requests contain more information than just a script name. The first request looks like this:

euca-run-instances emi-9BD01749 -k mykey0 -n 1 -g group0 -t c1.medium -d "collector.pl 192.168.7.77 [lovemycat.avi]"

This command translates to: after the instance boots up, it downloads the specified script ‘collector.pl’ from the file server at ‘192.168.7.77’ via wget and executes it. The purpose of the script ‘collector.pl’ is to turn the instance into the collector node for the video-processing application. First, the script installs all the necessary software via apt-get commands on Ubuntu; it uses various open-source software for the encoding and decoding tasks. It also installs the NFS server to create a shared directory that the processing nodes can access. Second, it downloads the target video file ‘lovemycat.avi’ from the file server at ‘192.168.7.77’ (for the convenience of the prototype, the file server is designed to provide all the external file resources to the instances). Then, the collector node decodes the AVI file into a collection of JPEG images. These image files are stored in the shared directory exported by the NFS server. Now, the collector node waits for the image files to be processed by the processing nodes; its job is to periodically scan the shared directory for progress.


After the collector node enters the stage where it idles and scans, the next step is to start a group of processing nodes by requesting:

euca-run-instances emi-9BD01749 -k mykey0 -n 3 -g group0 -t c1.medium -d "processor.pl 192.168.7.77 [10.219.1.2 neon.scm]"

As a result, 3 instances will boot up, download the specified script ‘processor.pl’ from the file server at ‘192.168.7.77’, and convert themselves into image-processing nodes. Each installs the open-source image-processing software GIMP along with the NFS client, and performs an NFS mount of the collector node’s shared directory, whose IP is ‘10.219.1.2’. Then, these processing nodes will start picking up image files from the shared directory and perform image-processing with GIMP according to the script ‘neon.scm’.
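As a sketch, the core of what each processing node runs might look like the lines below, assuming the share is exported at /mnt/shared and that neon.scm defines a script-fu function named neon (both names are assumptions):

# mount the collector's shared directory (IP taken from user-data)
mount -t nfs 10.219.1.2:/mnt/shared /mnt/shared
# apply the filter defined in neon.scm to one frame, in GIMP batch mode
gimp -i -b '(neon "/mnt/shared/img000001.jpg")' -b '(gimp-quit 0)'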

The syntax of the user-data for this image is:

-d "<script> <file_server_IP> [ <arguments_for_script> ]"


Now, here is one crucial design decision that complements the elasticity of the cloud. The work-unit for the image-processing is set to be 20 images at a time. This means that each node is only allowed to grab a chunk of 20 images at a time to perform image-processing on. Under this policy, the processing nodes must frequently ask the collector node for small amounts of work, instead of having the complete workload for each processing node pre-determined before the processing begins. This approach allows more processing nodes to be added to the system at any moment, thus taking full advantage of the elasticity.
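One simple way to implement that claim step, sketched under the assumption that unprocessed frames live in a todo directory on the NFS share, is to atomically move a batch of files into a per-node directory; a rename within one NFS filesystem is atomic, so two nodes cannot claim the same frame:

# claim up to 20 unprocessed frames by moving them into this node's directory
WORKDIR=/mnt/shared/claimed.$(hostname)
mkdir -p "$WORKDIR"
for f in $(ls /mnt/shared/todo/*.jpg 2>/dev/null | head -20); do
    mv "$f" "$WORKDIR/" 2>/dev/null || true   # lose the race gracefully
done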


When the processing nodes discover that there are no more images to be processed, they terminate themselves, freeing up computing resources for the cloud. When the collector node learns that all the images have been processed, it wakes up and encodes the images into a new video file. The final AVI file is then uploaded to a storage location belonging to the cloud-user. Eucalyptus and EC2 offer S3 storage that allows such an operation; I will save the details for later.


This prototype demonstrates how a complex operation, such as distributed video-processing, can be automated using the cloud. However, the automation is just the tip of the iceberg for the cloud. The raw power of the cloud comes from the ability to instantly replicate the application at a massive scale across the world. That capability is what drives the recent boom in Software-as-a-Service (SaaS) solutions.


Extra. Links to Processed Videos

Using Invert Filter –

Using Edge Filter –

Using Motion Blur Filter –

Related. Links to Project Home Page –

