Kyo Lee

Open-Source Cloud Blog


DevOps Culture — Fail Fast on Eucalyptus

At a meetup event down in San Diego, California, Eucalyptus had a chance to meet Sander van Zoest (@svanzoest), the VP of Technology at OneHealth, who is also the organizer of the San Diego DevOps group. Sander and his team at OneHealth have been using the Eucalyptus cloud for some time. Asked why OneHealth runs Eucalyptus in-house, Sander had some interesting things to say about dealing with health-related data and the company’s DevOps engineering culture.


Due to the strict regulations on Protected Health Information (PHI), OneHealth would need to take extra-strong measures to provide its services on AWS; Sander spent a good amount of time explaining to us how demanding it is to satisfy the regulations. Such barriers make it complicated to push any personally identifiable health information to the cloud.

In the AWS case, the specific barrier was that AWS provided no legal protection for sensitive data stored in its cloud storage. For instance, HIPAA and HITECH regulations require OneHealth to promise a 72-hour response time for informing its customers about a data breach, should one ever happen, and to provide an ETA for identifying and patching the security hole that caused the breach.

Sander points out that, at the moment, AWS does not guarantee such protections or services. For this reason, OneHealth’s production environment is deployed at a Rackspace co-location facility, since Rackspace provides HIPAA Business Associate Addendums. That said, given the evolving nature of the public cloud, it is very “cloudy” to predict how things will change in the near future. The recent announcement of AWS CloudHSM, although it does not cover the legal protections, is a good indicator of AWS’s interest in providing secure storage services going forward.


What this uncertain, “cloudy” future means for engineering at OneHealth is employing a variety of infrastructure environments to take advantage of each platform while staying flexible. It becomes essential to design OneHealth’s services and applications to be deployable on bare-metal systems at Rackspace (the production environment), AWS (the sandbox/staging environment), Eucalyptus (the in-house continuous integration and testing environment), and engineers’ laptops using Vagrant (the development and testing environment).

Across such heterogeneous systems, from production down to the engineer’s laptop, the environment — the OS, dependencies, configurations, etc. — needs to be kept uniform via virtualization and automation, allowing new code to be pushed seamlessly from the laptop up to production. For handling the life cycle of machines and VM instances, the engineers at OneHealth are big fans of Chef, which makes configuration management portable across infrastructure platforms. For virtual machines, the instance images are prepared via Debian preseed files, leveraging the open-source tool VeeWee.
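As a rough illustration of how the same configuration travels across those environments (the file names and cookbook layout below are hypothetical assumptions, not OneHealth’s actual setup), the same Chef run can converge a Vagrant VM on a laptop and a cloud or bare-metal node alike:

# on the developer laptop: bring up and converge the local Vagrant VM
vagrant up
vagrant provision

# on a Eucalyptus, AWS, or Rackspace node: apply the same cookbooks with chef-solo
# (solo.rb and node.json are placeholder configuration files)
chef-solo -c solo.rb -j node.json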


At OneHealth, the philosophy of DevOps is deeply embedded in every aspect of development and operations. The concept of DevOps was not new to many of the engineers, who brought the ideas of “Infrastructure as Code” and “Commit Often and Fail Fast” from previous companies such as Joost.

Speaking of DevOps culture, one fun fact Sander mentioned — which goes against intuition for many traditional IT shops — was that the operations team at OneHealth likes to take down instances and rebuild them regularly. Recycling the instances ensures the “freshness” of the deployed services and applications. Operations engineers become more concerned if an instance’s uptime exceeds, say, 30 days, because it means the content of the instance is outdated, possibly containing unfixed bugs or security issues. If the deployment setup were doing what it was supposed to do, it would have killed the outdated instance and brought up a new one with the latest updates.

The same goes for the development environment. It is much better to refresh the dev-environment instances through frequent relaunching and rebuilding than to have developers working on a stale environment, which turns out to be more harmful to development. Plus, this destroy-and-rebuild policy encourages developers to consistently check code into a version-controlled repository, allowing early detection of conflicts in code.

All of these procedures, bringing together datacenter automation and configuration management, are part of a fairly new movement in software development now labeled “DevOps”. DevOps folks often joke that even a few years ago the terminology didn’t exist, yet now DevOps has become one of the most sought-after practices in IT, all thanks to the widespread adoption of cloud computing, which gave birth to programmable infrastructure.



Introducing Metaleuca


What is Metaleuca?

Metaleuca is a bare-metal provisioning management system that interacts with the open-source software Cobbler via an EC2-like CLI.

Using Metaleuca, users can communicate with Cobbler to self-provision a group of bare-metal machines that boot up with fresh OS images. The main appeal of Metaleuca is that it allows users to manage bare-metal machines like EC2’s virtual instances, via command-line tools that feel much like ec2-tools or euca2ools.



Metaleuca Command-Line Tools

Metaleuca consists of a set of command-line tools that mirror some of the commands in ec2-tools or euca2ools. The list below shows a number of the core commands used in Metaleuca:

  • metaleuca-describe-profiles – Describe all the profiles provided in Cobbler
  • metaleuca-describe-systems – Describe all the bare-metal systems registered in Cobbler
  • metaleuca-reboot-system – Reboot the selected bare-metal system
  • metaleuca-run-instances – Initiate the provision sequence on the selected bare-metal systems
  • metaleuca-describe-instances – Describe the statuses of the provisioned bare-metal systems
  • metaleuca-terminate-instances – Terminate the bare-metal systems, returning them back to the resource pool

Metaleuca Configuration

Prior to installing Metaleuca, you must have already configured Cobbler to provision a group of bare-metal machines in your datacenter. If you are new to Cobbler, please visit Cobbler’s homepage for more information on how to set it up.
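Before moving on, it can help to sanity-check the Cobbler side with Cobbler’s own CLI; the commands below are standard Cobbler commands and simply list what Metaleuca will later query:

cobbler check          # report common configuration problems
cobbler profile list   # the profiles Metaleuca exposes via metaleuca-describe-profiles
cobbler system list    # the bare-metal systems Metaleuca exposes via metaleuca-describe-systems
cobbler sync           # push the configuration out to DHCP/TFTP/PXE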

Once you have Cobbler running in the datacenter, you will need to install Metaleuca on an Ubuntu machine or virtual machine. The installation guide for Metaleuca is provided at:

Metaleuca Walkthrough

Now that you have Metaleuca configured in the datacenter, let’s go over a scenario where you will want to launch two bare-metal instances with fresh CentOS 6.3 images.

First, you might want to use the command “metaleuca-describe-systems” to survey all the available systems registered in Cobbler.

[Screenshot: metaleuca-describe-systems output]

Metaleuca allows users to directly select which bare-metal machines to provision, by using the machines’ IPs. However, those who are familiar with AWS will point out that this is not how virtual machines are provisioned on EC2; rather than specifying IPs, AWS users simply provide the number of instances to launch. For this reason, here we will cover the EC2-style approach to provisioning the instances.

In Metaleuca, you first need to find out which ‘profile‘ is set to install the CentOS 6.3 image. In Cobbler, a profile maps to a preconfigured ‘kickstart‘ file that contains the netboot instructions for which OS to install when a machine boots up and initiates PXE boot. In other words, you can think of Cobbler profiles as the equivalent of instance images on EC2. In Metaleuca, you can display the available profiles using the command ‘metaleuca-describe-profiles‘:

[Screenshot: metaleuca-describe-profiles output]

Let’s say that the profile “qa-centos6u3-x86_64-striped-drives” is what we want to use.

Next, you will want to determine which “system-group” the machines should be selected from. In Metaleuca, the bare-metal machines can be grouped into different resource pools. For instance, in our QA system at Eucalyptus, which uses Metaleuca, we partitioned the machines in the datacenter into 6 groups: qa00, qa01, dev00, dev01, test00, and test01. Such grouping lets us attach semantics to the machine pools based on their usage, which is how we built the resource-allocation policy for our users, mainly developers and QA engineers. The command “metaleuca-describe-system-groups” displays all the machines and their groups:

[Screenshot: metaleuca-describe-system-groups output]

However, keep in mind that not all machines in the list will be available to be provisioned; some of the machines might be in use by other users. Thus, you will want to run the command “metaleuca-describe-system-groups -f” to discover which machines are free to use. Fortunately, at this moment, among the 6 system-groups mentioned above, the group “test01” has 2 machines available, which are labeled “FREED” in the screenshot below:

[Screenshot: metaleuca-describe-system-groups -f output]

Once you have the profile and the system-group availability information, you are ready to provision the bare-metal machines. The command “metaleuca-run-instances” takes as input how many instances you want to launch, which profile to use, which group to draw from, and finally a user string to mark the machines:

./metaleuca-run-instances -n 2 -g test01 -p qa-centos6u3-x86_64-striped-drives -u kyo_machines_for_demo

[Screenshot: metaleuca-run-instances output]

And, similar to ec2-tools and euca2ools, you can monitor the progress of the provisioned machines using the command “metaleuca-describe-instances -u“:

[Screenshot: metaleuca-describe-instances -u output]

Notice that the instances are in the “pending” state at the moment. Soon, about 8 minutes after launching, the instances will be shown as “running”:

[Screenshot: instances in the “running” state]

At that point, you may ssh into the bare metal machines to verify that they are up and running with the fresh CentOS 6.3 OS installed.

Later, the command “metaleuca-describe-system-user -u” comes in handy when you want to find out which machines are provisioned under your name:

[Screenshot: metaleuca-describe-system-user -u output]

When you are done with the machines, you may “free” them so that they return to the resource pool; the command for that is “metaleuca-terminate-instances -u“:

[Screenshot: metaleuca-terminate-instances -u output]

When running the command “metaleuca-describe-instances -u“, you will notice that the machines have been successfully freed:

[Screenshot: metaleuca-describe-instances -u output after termination]
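Putting the walkthrough together, the whole cycle can also be scripted end to end. The sketch below strings together only the commands shown above; the polling loop and its grep on the “running” state are illustrative assumptions rather than part of Metaleuca itself:

#!/bin/bash
# survey the free machines, provision two of them, wait, then free them again
./metaleuca-describe-system-groups -f
./metaleuca-run-instances -n 2 -g test01 -p qa-centos6u3-x86_64-striped-drives -u kyo_machines_for_demo

# poll until the instances report "running" (roughly 8 minutes after launch)
until ./metaleuca-describe-instances -u kyo_machines_for_demo | grep -q running; do
    sleep 60
done

# ... ssh in and use the machines ...

# return the machines to the resource pool
./metaleuca-terminate-instances -u kyo_machines_for_demo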

Metaleuca as an Open Source Project


Metaleuca evolved out of internal usage at Eucalyptus — as the development and test environment for engineers — and is now available as an open-source project on GitHub under the Apache license. The goal of the project is to complete the integration of Metaleuca into the Eucalyptus system so that it can serve as a “bare-metal only” zone in Eucalyptus. Your contribution is much appreciated!

Check out the project at:


A Developer Walks through Cloud

Skip Directly to [Instruction on How to Run the Video Processing Prototype]



1. Little Phone, Big Cloud

A few months ago, a phrase caught my attention: “Instagram for Video”. It was an interesting idea for a mobile application. As a software designer, I dug into the idea and soon realized one major implementation challenge.


It turns out that video is a collection of pictures – many, many pictures. At the standard rate of 24 frames per second, even a one-minute video comprises 1,440 pictures, which means image-processing 1,440 pictures on a mobile phone. That is a lot of pictures for the small battery in your mobile phone to handle.


There is an alternative to this scenario: consider moving the image-processing task over to a remote machine that is bigger, stronger, and meaner. In this scenario, the mobile phone uploads the video to a server via the internet, has it processed remotely, and retrieves the processed video back in a seamless fashion.


However, there is one absolutely crucial requirement in this scenario: we are going to need a big, big, big machine – big enough to handle millions of requests once this killer application goes viral (go big or go home). There is only one answer to this type of demand: “the Cloud.”

Luckily, there is an open-source cloud available: Eucalyptus is an open-source Infrastructure-as-a-Service cloud platform whose APIs are compatible with those of Amazon’s EC2. This makes Eucalyptus an ideal in-house cloud-application development platform. It guarantees that once my killer application runs on Eucalyptus, it will also run on EC2 with no modifications required, creating a truly portable cloud application with worldwide deployability.
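Because the APIs line up, portability in practice is largely a matter of which endpoint and credentials euca2ools point at. A minimal sketch, where the credential file path, endpoint, and image IDs are placeholders:

# point euca2ools at the in-house Eucalyptus cloud
source ~/.euca/eucarc          # credentials downloaded from the Eucalyptus cloud; sets EC2_URL and keys
euca-run-instances emi-XXXXXXXX -k mykey0 -t c1.medium

# the very same command against Amazon EC2, just with AWS credentials and an AWS endpoint
export EC2_URL=
euca-run-instances ami-XXXXXXXX -k mykey0 -t c1.medium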

2. IaaS Cloud

For those who are not familiar with IaaS clouds, let me take you on a quick walkthrough of the cloud.

Eucalyptus and Amazon’s EC2 offer “Infrastructure-as-a-Service” cloud platforms. This means that a cloud-user can request, “Hey cloud, I need 5 machines with full network connectivity and access to storage,” and within minutes the user will have the complete system ready for use.
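In euca2ools terms, that request is essentially a one-liner (the image ID and keypair below are placeholders):

euca-run-instances emi-XXXXXXXX -n 5 -k mykey0 -t c1.medium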


Take this concept a little further; instead of requesting machines for generic purposes, the cloud-user could specify at creation time which machines serve which purposes. For instance, using the example in this article, the cloud-user could ask, “Hey cloud, I want one machine to work as a collector and the rest as image-processors, and have them process my cat video immediately!” The cloud would then bring up a network of machines with a specific task assigned to each machine, and they would start processing the cat video right away. Once the processing was complete, the machines would self-terminate, leaving only the processed cat video behind.


3. App on the Cloud

Let’s go back to the video-processing application on the cloud. Here I will cover some major design considerations when developing applications on the cloud.

3.1. Parallelism and Elasticity

Designing an application on a distributed system requires the process to be broken down into small tasks. Then, one must identify the tasks that can bring parallelism into the process. In this video-processing application, the process can be broken down into 3 major steps: decoding the video into images, processing the images, and encoding the processed images back into a video. Given this breakdown, the natural approach is to distribute the image-processing task over multiple machines and assign a single machine to perform the encoding and decoding.
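To make those three steps concrete, here is roughly what they look like on a single machine. The article does not name the decoder/encoder it uses, so ffmpeg below is an assumption, and the file names are illustrative:

mkdir -p frames processed

# 1. decode: explode the video into individual JPEG frames
ffmpeg -i lovemycat.avi frames/img%05d.jpg

# 2. process: filter every frame (this is the step that gets distributed
#    across the image-processing nodes)

# 3. encode: stitch the processed frames back into a 24 fps video
ffmpeg -r 24 -i processed/img%05d.jpg processed_lovemycat.avi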


One important characteristic of the cloud that you must account for at the core of the design is its elasticity. Elasticity is what differentiates cloud applications from traditional distributed applications. Traditionally, in a distributed computing environment, the number of nodes N in the system is a static value that cannot change during a job. In the cloud environment, however, there is no bound on N; theoretically N is limitless. This means that at any given point during the job, the system should expect N to grow, or even shrink in some cases. For instance, in our video-processing application, we could initially start with 5 machines assigned as image-processing nodes, and in the middle of the processing we should be able to add 5 more nodes to boost throughput. Taking advantage of this elasticity must be considered at the design level of the application.
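In euca2ools terms, growing the worker pool mid-job is just another run-instances call with the same image and the same user-data as the processor request shown later in this article (the -d value here is a placeholder for that user-data):

# add 5 more image-processing nodes to a job that is already running
euca-run-instances emi-9BD01749 -k mykey0 -n 5 -g group0 -t c1.medium -d "<same user-data as the processor request in 3.2>"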


3.2. Prototype

The following is an overview of the prototype of the video-processing application in the cloud.

For more detailed instructions, please go to the page [Instruction on How to Run the Video Processing Prototype]

The goal of the prototype is to demonstrate a cloud application that performs image-processing tasks in a distributed fashion. The application takes a video file as input, performs image-processing in parallel, and when it terminates, the processed video file is stored in a known, provided storage location.

For the simplicity of the prototype, let’s assume that there is a machine working as a file server, with an Apache web service running in the open that is accessible from the cloud. In other words, any virtual instances (nodes) spawned on the cloud will have access to the files on the file server via download (wget). Given this setup, for instance, when we trigger the collector node, it can download the input video file from the file server to start the process.


For the prototype, we need to construct two types of nodes: the collector node and the image-processing node. However, before I go into further detail, I must explain what takes place when the cloud-user requests an instance from the IaaS cloud.

When the cloud-user asks the cloud, “Hey cloud, I need one machine,” the user is required to specify the image of the machine. In other words, the cloud-user must request, “Hey cloud, I need one machine with the RHEL 6.1 image that I have prepared for this video-processing prototype.” Then, the cloud will bring up a virtual instance that is flashed with the specified RHEL 6.1 image. Since users can prepare and upload images of their choice to the cloud, the possibilities for what you want the instances to do or become are limitless.


For this particular prototype, I prepared a single image to be used by all nodes. I took a generic Ubuntu Karmic image as the base image and modified its ‘rc.local’ script, which is the default script that gets executed automatically when the image boots up. The modified ‘rc.local’ script reads a line from the ‘user-data’ field, which gets passed to the instance from the cloud-user at creation. This small modification allows me to control the roles of the instances while having only one image. For example, I can request, “Hey cloud, I want one instance with my special Ubuntu image and have it run the script ‘'”, and later I can ask, “Hey cloud, I want another instance with the same image, but this one will run the script ‘’.”
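A minimal sketch of such an ‘rc.local’ hook, assuming the instance fetches its user-data from the standard EC2/Eucalyptus metadata service; the prototype’s actual script is not reproduced here, so treat the parsing and file layout as assumptions:

#!/bin/sh -e
# /etc/rc.local -- executed automatically at the end of boot
# read the user-data line supplied by the cloud-user at instance creation
USER_DATA=$(curl -s
SCRIPT=$(echo "$USER_DATA" | awk '{print $1}')   # first token: the role script to fetch
SERVER=$(echo "$USER_DATA" | awk '{print $2}')   # second token: the file server IP
ARGS=$(echo "$USER_DATA" | cut -d' ' -f3-)       # anything inside [ ... ]

# fetch the role script from the file server and run it
wget -q "http://$SERVER/$SCRIPT" -O "/tmp/$SCRIPT"
sh "/tmp/$SCRIPT" "$SERVER" $ARGS
exit 0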

The requests in the example above would look like the ones below. Notice that they use the same image ID ’emi-9BD01749′, but different ‘user-data’ values (-d).

First request to bring up a collector:

euca-run-instances emi-9BD01749 -k mykey0 -n 1 -g group0 -t c1.medium -d “”

Second request to bring up a processor:

euca-run-instances emi-9BD01749 -k mykey0 -n 1 -g group0 -t c1.medium -d “”


In the prototype, the actual requests contain more information than just a script name. The first request looks like,

euca-run-instances emi-9BD01749 -k mykey0 -n 1 -g group0 -t c1.medium -d “ [lovemycat.avi]”

This command translates to: after the instance boots up, it downloads the specified script ‘’ from the file server at ‘’ via wget and executes the script. The purpose of the script ‘’ is to turn the instance into the collector node for the video-processing application. First, the script installs all the necessary software via apt-get commands in Ubuntu; it uses various open-source tools for the encoding and decoding tasks. It also installs the NFS server to create a shared directory that the processing nodes can access. Second, it downloads the target video file ‘lovemycat.avi’ from the file server at ‘’ (for the convenience of the prototype, the file server is designed to provide all the external file resources to the instances). Then, the collector node decodes the AVI file into a collection of JPEG images. These image files are stored in the shared directory exported by the NFS server. Now, the collector node waits for the image files to be processed by the processing nodes. The collector node’s job is to periodically scan the shared directory for progress.
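A compressed sketch of what a collector script along those lines might look like; the package names, paths, frame naming, and polling logic are all assumptions rather than the actual script:

#!/bin/sh
SERVER="$1"    # file server IP, handed over by rc.local
VIDEO="$2"     # target video file, e.g. lovemycat.avi

# install an encoder/decoder and the NFS server, then export a shared work directory
apt-get -y install ffmpeg nfs-kernel-server
mkdir -p /var/frames/processed
echo "/var/frames *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra

# fetch the target video from the file server and explode it into JPEG frames
wget -q "http://$SERVER/$VIDEO" -O "/tmp/$VIDEO"
ffmpeg -i "/tmp/$VIDEO" /var/frames/img%05d.jpg
TOTAL=$(ls /var/frames/img*.jpg | wc -l)

# idle and scan: wait until every frame has a processed counterpart
until [ "$(ls /var/frames/processed | wc -l)" -ge "$TOTAL" ]; do
    sleep 30
done

# re-encode the processed frames into the final video
ffmpeg -r 24 -i /var/frames/processed/img%05d.jpg "/tmp/processed_$VIDEO"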


After the collector node enters the stage where it idles and scans, the next step is to start a group of the processing nodes by requesting,

euca-run-instances emi-9BD01749 -k mykey0 -n 3 -g group0 -t c1.medium -d “ [ neon.scm]”

As a result, 3 instances will boot up, download the specified script ‘’ from the file server at ‘’, and convert themselves into image-processing nodes. Each instance installs the open-source image-processing software GIMP and the NFS client. It then NFS-mounts the shared directory of the collector node, whose IP is at ‘’. Then, these processing nodes start picking up image files from the shared directory and perform image-processing using GIMP according to the script ‘neon.scm’.

The syntax of the user-data for this image is:

-d “<script> <file_server_IP> [ <arguments_for_script> ]”.


Now, here is one crucial design decision that complements the elasticity of the cloud. The work unit for the image-processing is set to 20 images at a time. This means that each node is only allowed to grab a chunk of 20 images at a time to process. Under this policy, the processing nodes must frequently ask the collector node for a small amount of work, instead of the complete workload for each processing node being pre-determined before processing begins. This approach allows more processing nodes to be added to the system at any moment, thus taking full advantage of the elasticity.
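Here is a sketch of the processing-node side of that policy, with the 20-image chunk made explicit. The claim-by-rename trick, package names, mount paths, and the batch call are assumptions for illustration; the real filtering is done by whatever function ‘neon.scm’ defines:

#!/bin/sh
COLLECTOR="$1"   # collector node IP (handling of the other user-data arguments is omitted here)

# install GIMP and the NFS client, then mount the collector's shared directory
apt-get -y install gimp nfs-common
mkdir -p /mnt/frames
mount -t nfs "$COLLECTOR:/var/frames" /mnt/frames

# keep claiming chunks of 20 unprocessed frames until none are left
while true; do
    CHUNK=$(ls /mnt/frames/img*.jpg 2>/dev/null | head -n 20)
    [ -z "$CHUNK" ] && break
    for IMG in $CHUNK; do
        mv "$IMG" "$IMG.claimed" || continue   # crude claim so no other node grabs this frame
        # 'apply-filter' is a stand-in for whatever function neon.scm actually defines
        gimp -i -b "(apply-filter \"$IMG.claimed\" \"/mnt/frames/processed/$(basename "$IMG")\")" -b "(gimp-quit 0)"
    done
done

# no work left: shut down and give the resources back to the cloud
halt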


When the processing nodes discover that there are no more images to be processed, they self-terminate, freeing up computing resources for the cloud. When the collector node learns that all the images have been processed, it wakes up and encodes the images into a new video file. The final AVI file is then uploaded to a storage location belonging to the cloud-user. Eucalyptus and EC2 offer S3 storage that allows such an operation; however, I will save the details for later.
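For that last step, one hedged possibility looks like the lines below, with s3cmd standing in for whatever S3/Walrus client the prototype actually uses and a placeholder bucket name:

# push the finished video into an S3/Walrus bucket owned by the cloud-user
s3cmd put /tmp/processed_lovemycat.avi s3://my-video-bucket/

# nothing left to do: the collector shuts itself down as well
halt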


This prototype demonstrates how a complex operation, such as distributed video-processing, can be automated using the cloud. However, the automation is just the tip of the iceberg. The raw power of the cloud comes from the ability to instantly replicate the application at massive scale across the world. This capability has contributed to the recent boom in Software-as-a-Service (SaaS) solutions.


Extra. Links to Processed Videos

Using Invert Filter –

Using Edge Filter –

Using Motion Blur Filter –

Related. Links to Project Home Page –
