Kyo Lee

Open-Source Cloud Blog

Category: cloud computing

Docker Strange Dev: How I learned to stop worrying and love the VM

A couple of months ago, my laptop died. I remember the day I brought it home for the first time. Still in college. I made a big decision—despite my marginal finances—to go with the top model. “It should last at least five years,” I convinced myself. Seven years later, I got the news from the certified Apple repair person: “Well, you'd be better off buying a new laptop than trying to fix this one.” So I did.

Now, I have a new laptop, which is currently placed on a pedestal. Watching how fast applications open and close on this laptop makes me cry. The future has arrived. I tell myself, “No! I will not f*** this one up this time! I will never install any software on this laptop ever!” Of course, I am in denial. This pure awesomeness will eventually decay. I predict the rate of decay accelerates exponentially with the number of applications installed on the laptop (Disclaimer: No supporting data exists for this assertion—other than my paranoia).

I begin searching for the answer to the quest: Install no software. So I try Docker.


The noticeable difference between using Docker, which runs Linux containers, and using a virtual machine (VM) is speed.

Both options provide the ability to isolate software’s running environment from the base operating system. Thus both deliver the desirable paradigm: build once and run everywhere. And both address my concern—that the base operating system must remain minimal and untouched. However, Docker stands out from the classic VM approach by allowing the cloud application—a virtual image with the desired software installed—to run within milliseconds. Compare that to the minutes it takes to boot a VM instance.


What significance does this difference make?

Docker removes a big mental roadblock for developers. Thanks to Docker’s superior response time, developers can barely perceive the difference between running applications in a virtual image (a virtual container, to be precise) and running them on the base OS. In addition to its responsiveness, Docker appeals to developers by eliminating the tedious procedure of booting a VM instance and managing the instance’s life cycle. Unlike the previous VM-centric approach, Docker embraces an application-centric design principle. Docker’s command line interface (CLI) below asks us two simple questions:

1. Which image to use?

2. What command to run?

docker run [options] <image> <command>

With Docker, I can type single-line commands to run cloud applications, just like other Linux commands. Then I will have my database servers running. I will have my web servers running. I will have my API servers running. And I will have my file servers running. All of this is done with single-line commands. The entire web stack can now be running on my laptop within seconds. The best part of all this? When I’m done, there will be no trace left on my laptop—as if they never existed. The pure awesomeness prevails.


Running Eucalyptus Console on Docker

 

For those who want to check out Eucalyptus Console to access Amazon Web Services, here are the steps to launch Eucalyptus Console using Docker on OS X.

Step 1. Install Docker on your laptop

Here is a great link that walks you through how to install Docker on OS X:

http://docs.docker.com/installation/mac/

Step 2. Pull Eucalyptus Console Docker image repository

Run the command below to pull the Eucalyptus Console Docker images (it will take some time to download about 1.5 GB of image files):

docker pull kyolee310/eucaconsole

Run the command below to verify that the eucaconsole images have been pulled:

docker images

For those who want to build the images from scratch, here is the link to the Dockerfile used:

https://github.com/eucalyptus/dockereuca

Step 3. Update the Docker VM’s clock

When running Docker on OS X, make sure that the Docker VM’s clock stays properly synchronized. A skewed clock can cause problems for some applications running on Docker. In order to fix this issue, you will need to log into the Docker VM and synchronize the clock manually.

You can SSH into Docker VM using the command below:

boot2docker ssh

Once logged in, run the command below to sync the clock:

sudo ntpclient -s -h pool.ntp.org

Run the command below to verify that the clock has been synced:

date

One more patch is needed: create an empty “/etc/localtime” file so that your OS X localtime file can be linked to the Docker VM’s localtime file at runtime:

sudo touch /etc/localtime

Exit the SSH session:

exit

This issue is being tracked here:

https://github.com/boot2docker/boot2docker/issues/476


Step 4. Launch Eucalyptus Console via Docker

Run the command below to launch Eucalyptus Console on Docker:

docker run -i -t -v /etc/localtime:/etc/localtime:ro -p 8888:8888 kyolee310/eucaconsole:package-4.0 bash

Admittedly, running a live “bash” session is not the Docker way of doing things, but bear with me for the moment until I figure out how to run Eucalyptus Console properly without using the “service” command.

The command above opens a bash session in the eucaconsole container; from that shell, run the command below to launch Eucalyptus Console:

service eucaconsole start

Step 5. Open Eucalyptus Console in a browser

Run the command below to find out the IP of Docker VM:

boot2docker ip

The output will look like:

    The VM’s Host only interface IP address is: 192.168.59.103

Using the IP above, access Eucalyptus Console at port 8888:

ex. http://192.168.59.103:8888/


In order to access AWS, you will need to obtain your AWS access key and secret key. Here is AWS’s documentation on how to obtain them:

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html

Step 6. Log into AWS using Eucalyptus Console



DevOps Culture — Fail Fast on Eucalyptus

At a meetup event down in San Diego, California, Eucalyptus had a chance to meet Sander van Zoest (@svanzoest), the VP of technology at OneHealth (http://www.onehealth.com/), who is also the organizer of the San Diego DevOps group (http://www.meetup.com/sddevops/). Sander and his team at OneHealth have been using Eucalyptus cloud for some time. Asked why OneHealth runs Eucalyptus in-house, Sander had some interesting stories to tell about dealing with health-related data and the company’s DevOps engineering culture.


Due to the strict regulations on Protected Health Information (PHI), OneHealth would need to take extra strong measures to provide its services on AWS; Sander spent a good amount of time explaining to us how demanding it is to satisfy the regulations. Such barriers make it complicated to push any personally identifiable health information to the cloud.

In AWS’s case, the specific barrier was that AWS provides no legal protection for sensitive data stored in its cloud. For instance, HIPAA and HITECH regulations require OneHealth to be able to promise a 72-hour response time to inform its customers about a breach of the data, should one ever happen, and to provide an ETA for identifying and patching the security hole that caused the breach.

Sander pointed out that, at the moment, AWS does not guarantee such protections/services. For this reason, OneHealth’s production environment is deployed at a Rackspace co-location facility, since it provides HIPAA Business Associate Addendums. However, given the evolving nature of the public cloud, it is very “cloudy” to predict how things are going to change in the near future. The recent announcement by AWS of CloudHSM (http://aws.typepad.com/aws/2013/03/aws-cloud-hsm-secure-key-storage-and-cryptographic-operations.html) — although it does not cover the legal protection — is a good indicator of AWS’s interest in providing secure storage services moving forward.


What this uncertain, “cloudy” future means for engineering at OneHealth is employing a variety of infrastructure environments to take advantage of each platform while staying flexible. It becomes essential to design OneHealth’s services and applications to be deployable on bare-metal systems at Rackspace (production environment), AWS (sandbox/staging environment), Eucalyptus (in-house continuous integration and testing environment), and engineers’ laptops using Vagrant (http://www.vagrantup.com/) (development and testing environment).

Across such heterogeneous systems, from production down to the engineer’s laptop, the development environment — the OS, dependencies, configurations, etc. — needs to be kept uniform via virtualization and automation, allowing new code to be pushed seamlessly from the laptop up to production. For handling the life cycle of machines and VM instances, the engineers at OneHealth are big fans of Chef (http://www.opscode.com/chef/), which makes the configuration management portable across infrastructure platforms. For virtual machines, the instance images are prepared via Debian preseed files while leveraging an open source tool, VeeWee (https://github.com/jedi4ever/veewee).


At OneHealth, the philosophy of DevOps is deeply embedded in every aspect of its development and operation. The concept of DevOps was not new to many engineers who brought in the ideas of “Infrastructure as Code” and “Commit Often and Fail Fast” from previous companies such as MP3.com and Joost.

Speaking of DevOps culture, one fun fact Sander mentioned — which goes against intuition for many traditional IT shops — was that the operations team at OneHealth likes to take down the instances and rebuild them regularly. The recycling of the instances ensures the “freshness” of the deployed services and applications. The operations engineers would be more concerned if an instance’s uptime were longer than, say, 30 days, because it would mean the contents of the instance were outdated, possibly containing unfixed bugs or security issues. If the deployment setup were doing what it was supposed to be doing, it would have killed the outdated instance and brought up a new instance with the latest updates.

The same goes for the development environment. It is much better to refresh the dev environment instances with frequent relaunching and rebuilding than to have the developers working on a stale dev environment, which turns out to be more harmful to the development. Plus, this destroy-and-rebuild enforcement encourages the developers to consistently check their code in to a version-controlled code repository, allowing early detection of conflicts in the code.

All of these procedures, bringing together datacenter automation and configuration management, are part of a very new movement in software development now labeled “DevOps”. The DevOps folks often joke that even a few years ago the terminology didn’t exist, but now DevOps has become one of the most sought-after practices in IT. All thanks to the wide spread of cloud computing, which gave birth to the programmable infrastructure.


Introducing Metaleuca


What is Metaleuca?

Metaleuca is a bare-metal provisioning management system that interacts with the open-source software Cobbler via an EC2-like CLI.

Using Metaleuca, users can communicate with Cobbler to self-provision a group of bare-metal machines to boot up with new, fresh OS images. The main appeal of Metaleuca is that it allows users to manage bare-metal machines like EC2’s virtual instances, via command-line tools that feel much like ec2-tools or euca2ools.


Metaleuca Command-Line Tools

Metaleuca consists of a set of command-line tools that mirror some of the commands in ec2-tools or euca2ools. The list below shows the core commands used in Metaleuca:

  • metaleuca-describe-profiles – Describe all the profiles provided in Cobbler
  • metaleuca-describe-systems – Describe all the bare-metal systems registered in Cobbler
  • metaleuca-reboot-system – Reboot the selected bare-metal system
  • metaleuca-run-instances – Initiate the provision sequence on the selected bare-metal systems
  • metaleuca-describe-instances – Describe the statuses of the provisioned bare-metal systems
  • metaleuca-terminate-instances – Terminate the bare-metal systems, returning them back to the resource pool

Metaleuca Configuration

Prior to installing Metaleuca, it is required that you have already configured Cobbler to provision a group of bare-metal machines in your datacenter. If you are new to Cobbler, please visit the Cobbler’s homepage http://cobbler.github.com/ for more information on how to set up Cobbler.

Once you have Cobbler running in the datacenter, you will need to install Metaleuca on a Ubuntu machine, or virtual machine. The installation guide for Metaleuca is provided at:

https://github.com/eucalyptus/metaleuca/wiki/Metaleuca-Installation-Guide

Metaleuca Walkthrough

Now that you have Metaleuca configured in the datacenter, let’s go over a scenario where you will want to launch two bare-metal instances with fresh CentOS 6.3 images.

First, you might want to use the command “metaleuca-describe-systems” to survey all the available systems registered in Cobbler.


Metaleuca allows users to directly select which bare-metal machines to provision — by using the machines’ IPs. However, those who are familiar with AWS will point out that this is not how virtual machines are provisioned on EC2; rather than specifying IPs, AWS users provide the number of instances to launch. For this reason, we will cover the EC2-style approach to provisioning the instances here.

In Metaleuca, you first need to find out which 'profile' is set to install the CentOS 6.3 image. In Cobbler, a profile maps to a preconfigured 'kickstart' file that contains the netboot instructions specifying which OS to install when a machine boots up and initiates PXE boot. In other words, you may compare the profiles in Cobbler to the instance images on EC2. In Metaleuca, you can display the available profiles using the command 'metaleuca-describe-profiles':


Let’s say that the profile “qa-centos6u3-x86_64-striped-drives” is what we want to use.

Next, you will want to determine which “system-group” you want the machines to be selected from. In Metaleuca, the bare-metal machines can be grouped into different resource pools. For instance, in our QA system at Eucalyptus, which utilizes Metaleuca, we partitioned the machines in the datacenter into 6 groups: qa00, qa01, dev00, dev01, test00, and test01. Such grouping gives the machine pools semantics based on their usage, on top of which we created a resource allocation policy for the users, who are mainly developers and QA engineers. The command “metaleuca-describe-system-groups” displays all the machines and their groups accordingly:


However, keep in mind that not all machines in the list will be available to be provisioned; some of the machines might be in use by other users. Thus, you will want to run the command “metaleuca-describe-system-groups -f” to discover which machines are free to use. Fortunately, at this moment, among those 6 system-groups mentioned above, the group 'test01' has 2 machines available, which are labeled as “FREED”:


Once you have the profile and the system-group availability information, you are ready to provision the bare-metal machines. The command “metaleuca-run-instances” takes as input how many instances to launch, which profile to use, which group to draw from, and finally a user string to mark the machines:

./metaleuca-run-instances -n 2 -g test01 -p qa-centos6u3-x86_64-striped-drives -u kyo_machines_for_demo


And, similar to ec2-tools and euca2ools, you can monitor the progress of the provisioned machines using the command “metaleuca-describe-instances -u“:


Notice that the instances are in the “pending” state at the moment. Soon, about 8 minutes after launching, the instances will be shown as “running”:


At that point, you may ssh into the bare metal machines to verify that they are up and running with the fresh CentOS 6.3 OS installed.

Later, the command “metaleuca-describe-system-user -u” comes in handy when you want to find out which machines are provisioned under your name:


When you are done with the machines, you may “free” them so that they return to the resource pool; the command for that case is “metaleuca-terminate-instances -u”:

When running the command “metaleuca-describe-instances -u”, you will notice that the machines have been successfully freed:


Metaleuca as an Open Source Project


Metaleuca evolved out of internal usage at Eucalyptus — the development and test environment for engineers — and is now available as an open source project on GitHub under the Apache license. The goal of the project is to complete the integration of Metaleuca into the Eucalyptus system so that it can be offered as a “bare-metal only” zone in Eucalyptus. Your contribution is much appreciated!

Check out the project at:

https://github.com/eucalyptus/metaleuca

\m/

Beyond Continuous Integration: Locking Steps with Dev, QA, and Release

Continuous integration: the practice of frequently integrating one’s new or changed code with the existing code repository [wikipedia]

In this blog we will talk about how the continuous integration process was put in place for the new component, Eucalyptus User Console, in order to coordinate the efforts of the dev, QA, and release teams throughout the development cycle of Eucalyptus 3.2.

Background

Eucalyptus User Console is a newly introduced component in Eucalyptus, whose main goal is to provide an easy-to-use, intuitive browser-based interface to cloud users, thus assisting dev/test cloud deployments among IT organizations and enterprises. Eucalyptus User Console consists of two components: a JavaScript-based client-side application and a Tornado-based user console proxy server.

Early Involvement

The first phase of the development was to come up with a quick prototype to demonstrate how the user console would work under the initial design of the architecture. As soon as the prototype was evaluated and its feasibility was verified, the release team started creating the packages for two major Linux OS platforms: Ubuntu and CentOS/RHEL.

The early involvement of the release team turned out to be the best help any developers or QA engineers could ask for; from the very beginning stage of the development, the release team was able to provide invaluable information that served as a guardrail for the fast-moving development. Such information included advising on how the files should be named and organized and identifying which dependencies should or should not be used in order to meet the requirements of various Linux distributions. Dealing with such issues at a later stage of the development would undoubtedly have been a major pain in the back-end.


Furthermore, the release team was able to ensure that the development of the new user console would never go off track with respect to the Linux distro requirements by setting up an automated daily package-building process using Jenkins — which utilizes the VM resources from our Release cloud that runs on Eucalyptus.

Keeping Up With Eucalyptus

Setting up the automated process to build the packages would allow the release team to keep an eye on the progress of the user console’s development in terms of the ability to build the packages according to the constraints set by the Linux distributions. However, it would not guarantee that the newly built packages contained a version of the user console that worked with the current, up-to-date Eucalyptus cloud, which was also in development.

Thus, the challenge was to ensure that the latest built user console packages work with the latest built Eucalyptus throughout the development.

In order to solve this issue, the QA team created a testunit that automatically installs the latest user console packages on a newly built Eucalyptus. Then, the testunit was added to the main test sequences used by the Eucalyptus 3.2 development in our automated QA system, making the installation of the latest user console packages accessible by all developers at Eucalyptus.

This setup ensured that a failure in the user console package installation would be visible to any developer throughout the development, thus allowing the failure to be detected and reported quickly.


The testunit ui_setup can be seen in action in the table of test-sequence results produced by the automated QA system. Check out the link below for more details on this testunit:

https://github.com/eucalyptus-qa/ui_setup

Circle of Trust

As the user console evolved out of its prototype state and took a more product-like shape, the QA team was working in parallel, figuring out how to set up the automated testing process for the user console. A separate blog post talks in detail about how Selenium was used to create the automated web-browser testing tool, se34euca.


In the mid-stage of the development, as the features of the user console started functioning in a reasonably stable manner, 3 automated tests were added — incrementally — to ensure the working state of the user console throughout the development.

Those 3 tests are:

  1. user_console_view_page_test – https://github.com/eucalyptus-qa/user_console_view_page_test
  2. user_console_generate_keypair_test – https://github.com/eucalyptus-qa/user_console_generate_keypair_test
  3. user_console_launch_instance_test – https://github.com/eucalyptus-qa/user_console_launch_instance_test

These automated tests were to ask the 3 simple questions below on a daily basis:

  1. Can the user log in and see all the landing pages on the latest user console?
  2. Can the user generate a new keypair using the latest user console?
  3. Can the user launch a VM instance using the latest user console?

Of course, it would be possible, and desirable, to ask more questions in a more complicated fashion. However, during the rapid development phase, asking those 3 simple questions on a daily basis turned out to be sufficient, and effective, for understanding whether something terrible had happened to the user console or not.


The goal of these automated tests at this stage of the development was not to detect every little defect in the product. Not yet, at least.

The main purpose was rather to serve as an indicator for the developers, QA engineers, and release engineers, assuring ourselves that the change that went into the code earlier that day did not ruin the delicate trust among the three groups, meaning that the build, installation, and configuration procedures were still intact. Having such assurance checked by mechanical means made the three groups extremely effective in discovering issues during the development, since it allowed each member to narrow down exactly what was responsible for a defect within a finely reduced time frame: hours, rather than days or weeks.

Guardrail For Development

Having the automated package-build process and the automated installation/configuration process in place at the early stage of the development proved to be extremely useful; rather than merely agreeing on written procedures, the dev, QA, and release teams materialized such agreements into actual implementations and put them to work using various automated mechanisms that run on a daily basis. Therefore, throughout the development, we were able to witness and assure ourselves that we were making progress in accordance with the plan and our self-imposed restrictions.

Check out the Eucalyptus Open QA webpage to see the continuous integration at Eucalyptus in action:

Eucalyptus Open QA (beta) – http://ec2-50-112-61-121.us-west-2.compute.amazonaws.com/open_qa.php

Cloud App. Design: Create a Flexible Automated Web-UI Testing Tool using Selenium

In this article, I will go over the technical details of how we at Eucalyptus used Selenium to simulate the workload of a large number of cloud users in order to ensure the quality of the user experience in the new Eucalyptus user console.

As covered in my previous blog, Eucalyptus is coming out with a new user console that is browser-based and intuitive to use, thus playing a key role in promoting cloud adoption among IT organizations and enterprises. But the challenge was to ensure that this brand-new user console would be ready to handle real-world workloads when released out in the wild. The answer to this challenge was to simulate the activities of 150 cloud users using Selenium, an open source tool for automating web application testing.

How to Automate an Online User

The first step was to download Selenium IDE for Firefox. Selenium IDE is a must-have GUI tool for automating clicks and input-submits on a web application. After installing Selenium IDE on your computer, you can start it from Firefox’s Tools menu:

When started, Selenium IDE opens up its own separate window:

Notice the red dot on the top-right corner of Selenium IDE. When clicked, Selenium IDE will start recording all the activities you perform on the web-browser — every link you click and every input you type on the browser will be recorded as command-lines on Selenium IDE as seen below:

What Selenium IDE allows you to do is to replay the recorded activities, such as clicking and typing, on the browser in the exact same order that they were performed.

TIP.

But soon you will notice that, when replayed, Selenium IDE tends to fly through all the clicks at lightning speed, so the replayed activities often result in failures — the browser and web application cannot keep up with the speed of the clicks performed by Selenium IDE.

In order to prevent such cases, you will need to manually step through the recorded activities and insert various “pause-and-check” points using the 'waitForElementPresent' command. For instance, when there is a command 'click link=Delete', I would put a 'waitForElementPresent link=Delete' command before the click command to ensure that the page has fully loaded and the link 'Delete' is indeed present on the page before allowing Selenium IDE to execute the command 'click link=Delete'. Later I learned that for every 'click' command, it is always a good habit to throw in a 'waitForElementPresent' command.
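
The same pause-and-check idea carries over to the exported Python scripts. Below is a minimal sketch, assuming a recent Selenium Python binding and a hypothetical page with a 'Delete' link (the URL and link text are illustrative, not taken from the actual console):

# Minimal sketch: wait for an element to be present before clicking it.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://192.168.59.103:8888/")  # illustrative console URL

# Equivalent of 'waitForElementPresent link=Delete' before 'click link=Delete':
wait = WebDriverWait(driver, 10)  # give the page up to 10 seconds
delete_link = wait.until(EC.presence_of_element_located((By.LINK_TEXT, "Delete")))
delete_link.click()

driver.quit()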

After verifying that the recorded action is repeatable in Selenium IDE at full speed, the next step is to export the action into the Selenium Python WebDriver format:

The result of the export above is a script file that describes the recorded Selenium action in Python’s unittest format:

Once the script is exported, you can run the recorded action against a remote Selenium server without having to open up a web browser yourself. In other words, you can now simulate an online user performing the exact same recorded actions on a web browser simply by running the Python script generated by Selenium IDE.
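
For reference, a stripped-down sketch of what such an exported script tends to look like, adapted to point at a remote Selenium server, is shown below; the server address, console URL, and page title are placeholders rather than the real exported code:

# Sketch of an exported Selenium Python WebDriver test (all names are illustrative).
import unittest
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

class ViewLoginPageTest(unittest.TestCase):
    def setUp(self):
        # Point the test at a remote Selenium server instead of a local browser.
        self.driver = webdriver.Remote(
            command_executor="http://192.168.51.10:4444/wd/hub",
            desired_capabilities=DesiredCapabilities.FIREFOX)

    def test_login_page_loads(self):
        self.driver.get("http://192.168.59.103:8888/")
        self.assertIn("Eucalyptus", self.driver.title)

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()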

Remote Selenium Server Configuration

Before running the script, you will need to configure a machine to run a remote Selenium server, which will behave like a web client (the selenium-server-standalone JAR referenced below is assumed to have been downloaded to the working directory). The steps are: on an Ubuntu machine, execute the following commands:

sudo apt-get -y update
sudo apt-get -y install default-jre
sudo apt-get -y install xvfb
sudo apt-get -y install firefox
sudo apt-get -y install python-pip
pip install selenium
Xvfb :0 -ac 2> /dev/null &
export DISPLAY=:0
nohup java -jar selenium-server-standalone-2.25.0.jar &


After running the commands above, you will have an Ubuntu machine capable of running the exported Python Selenium script, which then simulates an online user opening up a Firefox browser and performing the recorded clicks.

Creating a Flexible, Reusable Testing Tool

Now, your task is to produce many exported Python Selenium scripts for all activities on the web application that will be used as building blocks for creating different user behaviors and workflows.

The first collection of Python Selenium scripts I produced was to visit every single landing page on the Eucalyptus user console. The second collection of Python Selenium scripts was to create cloud resources under the default setting. Having those two sets of Python Selenium scripts allowed me to construct complicated user interactions on the web application. For instance, with a bit of shuffling of the scripts, I could build up a user scenario where the online user would visit the keypair page, create a new keypair, visit the dashboard page, visit the security group page, create a new security group, revisit the dashboard page, and so on.

The next task was to consolidate all the scripts into one library file, getting rid of static values in the variables and breaking down the actions in the scripts into functions. Having such a unified library enables test-writers to stitch and arrange these functions together to construct whole new user scenarios as needed.

When examining the Eucalyptus user console test framework se34euca, you will see that the main library file 'lib_euca_ui_test.py' contains the functions exported from Selenium IDE, where each function describes a very specific action to perform on the web console. The files 'testcase_*.py' list the arrangements of those functions to form simple or complex user behaviors. Finally, the files 'runtest_*.py' are the executables of those test cases that take as input the target web console environment.
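
To illustrate the idea (the module and function names below are made up for the sketch and are not the actual se34euca API), a test case file essentially boils down to stitching library functions together into a scenario:

# Sketch: composing library functions into a user scenario (hypothetical names).
from lib_ui_test import UITester  # hypothetical library module

def scenario_keypair_and_security_group(host, port, account, user, password):
    ui = UITester(host, port)              # one simulated browser session
    ui.login(account, user, password)
    ui.goto_keypair_page()
    ui.create_keypair("test-keypair-00")
    ui.goto_dashboard_page()
    ui.goto_security_group_page()
    ui.create_security_group("test-group-00", "demo group")
    ui.goto_dashboard_page()
    ui.logout()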

Cloud Application

Now that you have a way to convert a Ubuntu machine into a Selenium server and have the Selenium test framework checked into a GitHub repository, you have a way to launch the Selenium test as a cloud instance — using se34euca as an example, the steps are:

Step 1. Launch a cloud instance from an Ubuntu image.
Step 2. Convert the instance into a Selenium server by running the configuration commands above, or by running the installer in se34euca.
Step 3. Git clone se34euca.
Step 4. Run the test case of your choice.
Step 5. Terminate the instance when the test is finished.

Of course, you can easily automate steps 2, 3, and 4 to wrap the entire process into a single scripted operation. Then, with the help of a cloud infrastructure such as AWS or Eucalyptus, simulating 150 users can be as simple as launching 150 instances and running the script on each instance by feeding the parameters in as user-data.

Code Reference

For those who are interested in creating a framework for testing your own web application, please feel free to check out the Eucalyptus user console test framework se34euca at:

https://github.com/eucalyptus/se34euca

for a reference, and leave a comment if you have any questions or suggestions.

Simulate 150 Cloud User Activities Using Open Source Tools

For the 3.2 release in this December, Eucalyptus is coming out with an intuitive, easy-to-use cloud user console, which aims to support the on-premise dev/test cloud adoption among IT organizations and enterprises.


This easy-to-use Eucalyptus User Console consists of two main components: a browser-side JavaScript application, written in jQuery, and a proxy server that utilizes Python Boto to relay requests to the Eucalyptus cloud and is written in Python Tornado, the open source, scalable, non-blocking web server released by Facebook.
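
To make the proxy idea concrete, here is a minimal sketch, not the actual console code, of a Tornado handler that uses Boto to relay a keypair-listing request; the credentials are placeholders, and pointing Boto at a Eucalyptus endpoint instead of AWS would additionally need the cloud's host and port:

# Minimal sketch of a Boto-backed Tornado proxy handler (not the real console code).
import boto
import tornado.ioloop
import tornado.web

class KeypairsHandler(tornado.web.RequestHandler):
    def get(self):
        # Placeholder credentials; the real console takes them from the user session.
        conn = boto.connect_ec2(aws_access_key_id="AKIA...",
                                aws_secret_access_key="SECRET...")
        keypairs = conn.get_all_key_pairs()
        self.write({"keypairs": [kp.name for kp in keypairs]})

application = tornado.web.Application([(r"/keypairs", KeypairsHandler)])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()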

The target scale for the initial version of the user console is set to handle 150 simultaneous user activities under a single user console proxy.

Now, the challenge is how to simulate these 150 users to ensure that the user console and the proxy are able to withstand the workload of 150 active cloud users; more importantly, how to ensure that such a workload does not jeopardize the user experience on the console.

One obvious answer is to find 150 people, train them thoroughly, and ask them to participate in the load testing. After all, 150 is doable.

However, what’s not doable is having those 150 people repeat the process over and over during the entire life cycle of the development, right up until the release.

Then, the most “realistic” answer is to simulate those 150 people using machines. It turns out that the machines are really good at repeating the same things over and over, and they tend to behave in a very predictable manner when tuned properly.

At Eucalyptus, we use Selenium, an open source web-testing automation tool, to simulate the actual user interactions on the user console.

The steps are:

  1. Use Selenium IDE on Firefox to write an automation script that completes a single path of a cloud user workflow — for instance, one simple workflow is to log into the console, create a new keypair, and log out; another is to log in, create a new volume, and log out.
  2. Repeat the first step for all possible use cases to ensure that all, or most, of the functionality on the console is covered, making every use case automatically executable via Selenium IDE.
  3. Export those automated IDE scripts to the Selenium Python WebDriver format, which allows the automated scripts to run on a remote server without actually opening up a browser.
  4. Create a wrapper for each exported script so that each test case can be executed as a command-line tool on Linux (a rough sketch of such a wrapper follows below).
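
A wrapper of that kind is usually just a thin command-line layer around the exported test; the sketch below is illustrative only, with option names modeled on the se34euca examples later in this post and the imported test module purely hypothetical:

#!/usr/bin/env python
# Sketch of a command-line wrapper around an exported Selenium test (hypothetical names).
import argparse
from testcase_view_page import run_view_page_test  # hypothetical test module

def main():
    parser = argparse.ArgumentParser(description="Run a user-console UI test")
    parser.add_argument("-i", "--ip", required=True, help="console IP")
    parser.add_argument("-p", "--port", default="8888", help="console port")
    parser.add_argument("-a", "--account", required=True, help="account name")
    parser.add_argument("-u", "--user", required=True, help="user name")
    parser.add_argument("-w", "--password", required=True, help="password")
    args = parser.parse_args()
    run_view_page_test(args.ip, args.port, args.account, args.user, args.password)

if __name__ == "__main__":
    main()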

The link below contains the collection of automated Selenium WebDriver test scripts, command-line tools, and their installer for testing the Eucalyptus User Console:

Se34Euca (Selenium34Eucalyptus) – https://github.com/eucalyptus/se34euca

With Se34Euca, you can instantaneously convert any machine — or virtual machine if you are already a cloud geek 😉 — into a Eucalyptus cloud user simulator.

The steps are: on an Ubuntu image, run the commands below to install and set up Se34Euca:

sudo apt-get -y install git-core

git clone git://github.com/eucalyptus/se34euca.git

cd ./se34euca/script/

./installer_se34euca.py

Then, running the actual test can be as simple as:

export DISPLAY=:0

./runtest_view_page.py -i 192.168.51.6 -p 8888 -a ui-test-acct-00 -u user00 -w mypassword1 -t view_all_page_in_loop

The command line above will simulate a cloud user clicking through every single landing page on the user console within 2 seconds, then resting for 5 seconds, and repeating the frantic, yet controlled, clicking again and again and again.

However, funny enough, it turned out that the automated script’s ability to click through all pages on the user console within 2 seconds was well beyond the capability of a human user. The graph below renders the normal behavior of an actual human user. The Y-axis in the graph shows the total length of TCP packets seen in a second on the user console proxy server machine, captured via tcpdump. Notice the peak in the beginning as the user logs in, and a group of little ripples that mark the user clicking buttons or viewing different pages over a 7-minute period:
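
For reference, the per-second byte counts behind a graph like this can be produced with a short script. Here is a minimal sketch that sums the TCP payload lengths reported by tcpdump, assuming output from something like tcpdump -tt -n port 8888 is piped in; the port is a placeholder:

# Sketch: sum TCP payload lengths per second from tcpdump output piped in on stdin.
# Assumes tcpdump was run with -tt (epoch timestamps) and lines end in "length N".
import re
import sys
from collections import defaultdict

bytes_per_second = defaultdict(int)
line_re = re.compile(r"^(\d+)\.\d+ .*length (\d+)$")

for line in sys.stdin:
    match = line_re.match(line.strip())
    if match:
        second, length = int(match.group(1)), int(match.group(2))
        bytes_per_second[second] += length

for second in sorted(bytes_per_second):
    print("%d %d" % (second, bytes_per_second[second]))  # two columns for Gnuplot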

And the graph below shows the difference between the actual user behavior and the automated script behavior simulated by a single instance of Se34Euca. Notice the super-human strength of the automated script — the first half of the graph shows the same 7-minute period shown in the graph above. According to the graph, the automated script is able to generate 10 times the workload of a human user.

This discovery turns out to be good news; since one Se34Euca instance can generate 10x the workload of a human user, all I need to do is launch 15 instances of Se34Euca to simulate 150 users. So, I provisioned 3 Ubuntu machines and launched 5 instances of Se34Euca on each machine:

The first fifth of the graph above covers the same period as the second graph above. What you are looking at is 15 instances of Se34Euca clicking through every single page on the Eucalyptus User Console for about two hours, starting at 21:00 mark.

When the packet length per second is averaged over a 60-second observation period, the graph looks like this:

The graph above shows that when 150 users are simultaneously logged in to the user console, the average packet transmission throughput seen on the wire is about 750 Kb per second. Assuming that the user console proxy server is hooked up to a 1 Gig link, a throughput of 750 Kb per second is certainly “doable” by all means. 😉

Then, how do we ensure the user experience of the console?

Simple. While the user console proxy server is being slammed by 150 click-monkeys, I open up my own browser to verify that my interaction with the console is as smooth as usual. 🙂

In my next blog post, I will cover more details on the exact setup of the Eucalyptus User Console load testing, including the Selenium scripts and monitoring setup, and dig deeper into the analysis of the data. Please stay tuned 😉

Meanwhile, feel free to check out the blog below if you would like to preview the Eucalyptus User Console for yourself:

http://coderslike.us/2012/11/11/installing-the-eucalyptus-console-from-source-and-packages/

Allow Everyone to Launch Your Web-Service: Open Source Web-Service using GIT and AWS

Open Source is the future of software.

The statement above is self-evident at this point of history — I mean, have you checked out www.microsoft.com/opensource recently?

Then, what does open-source mean when it comes to web-services?

Open Source has been a well-understood concept among the web-application community due to the neat “View Source” feature that came with almost all web browsers. Copy-and-pasting a chunk of HTML code from one website to another was a de facto standard way of developing websites. Whether admitted or not, web-application development is deeply rooted in open source culture.

But where should this open source culture be heading? Should we keep up the culture of copy-and-pasting others’ code with no telling? Or should we embrace this open source culture and explore the possibilities that arise from sharing code?

Then, it occurred to me that, “What if we could share the entire code for web-services?”

Imagine if anyone could launch web-services such as eBay, Craigslist, Amazon, Angie’s List, Yelp, Instagram, Twitter, etc. on their own instances on AWS. Anyone could instantly replicate all the functionality and services provided by such well-known web-services at a micro scale. If the big-name web-services are the department stores downtown, these open-source web-services on AWS instances are the mom-and-pop stores for the local community, or they could be the black market, since these services could form and disappear in a rather unpredictable manner — similar to the life cycle of AWS instances. 😉

Think about the scenario where a person can launch his/her own e-bay online auction service on an AWS instance for a school fund-raising event for a month. The person might also want to launch an instance of Craigslist for the same event, or Instagram and Twitter to keep people engaged during this period. Then, once the event is over, all the services can go away as if nothing ever happened.

Welcome to the era of Cloud Applications.

To test out the concept, I used the application “Open QA“, a web-service that is designed to display the test results of the development progress at Eucalyptus. The goal is to allow any Eucalyptus community member to launch the same service on AWS instances.

First, the web-service has to be broken down into two bodies, the service layer and the data. Then, each body of the web-service needs to be available in the open, allowing anyone to download the code and convert the AWS instances to Open QA web-service instances.

This is where Git comes in very nicely. For public repositories, Git allows you to create as many branches as you want, which can serve as read-only code repositories for anyone on the Internet without hassle.

The service layer part of Open QA is available at:

https://github.com/eucalyptus/open-qa

The data for Open QA is available at:

https://github.com/eucalyptus/open-qa-frontend-data

Upon launching an instance on AWS, a user can simply convert the raw Amazon Linux instance into the Open QA web-service by executing 4 commands below:

sudo yum -y install git
git clone git://github.com/eucalyptus/open-qa.git
cd ./open-qa/script/
./open_qa-installer.py -t amazon -e <your_email>

At this point, the AWS instance is running a HTTPD service with Open QA PHP code in its /var/www/html directory.

However, with only the service layer installed, the instance is missing the data, thus has no content to display.

Notice that the data for Open QA is available at a different GitHub repository. The user can supply the service with the actual Open QA data by running the 5 commands below:

cd ~
git clone git://github.com/eucalyptus/open-qa-frontend-data.git
cd ./open-qa-frontend-data
sudo tar -zxvf ./cache_storage_for_open_qa.tar.gz
sudo cp -r ./cache_storage_for_open_qa/* /var/www/html/webcache/.

Now, the instance is completely converted to run the Open QA web-service with the up-to-date test results.

The data for Open QA is, for now, updated every day at 12:01 am — pushed to the public Git repository from the QA server at Eucalyptus HQ. In order to keep the data in sync, the AWS instances are scheduled to periodically pull the latest data via “git pull” in the data directory and copy the new data over to the /var/www/html/webcache directory, which can be set up as a cron job via “sudo crontab -e” on the instances.

This proof-of-concept with the Open QA application demonstrates how Git and AWS can be used to make an open-source web-service that can be launched by anyone with an AWS account. The separation of the service layer and the data allows each to be updated independently; for some other web applications, it might be desirable for the user to supply his/her own customized data, thus utilizing only the service layer.

If anyone ever dreams of launching 10,000 instances of a web-application in the cloud, it must be done through the open source scenario above where the application pulls the needed service and data at its own pace and for its own purpose, which is to serve locally.

10 Steps to Euca Monkey

Euca Monkey is an easy-to-deploy test tool designed for stress-testing a Eucalyptus cloud. The tool repeatedly generates and tears down 6 types of cloud user resources: running instances, volumes, snapshots, security groups, keypairs, and IP addresses. As the resources are being populated and released, the cloud is actively queried to validate that such resources are indeed being allocated correctly per request. Then, the tool renders the progress of the stress testing using Gnuplot, an open source graphing tool, and displays the graphs as a web service in real time.

Euca Monkey uses cloud-resource-populator — which utilizes Eutester, which is based on Boto, thus making the tool AWS-API-compatible — to populate and release resources from Eucalyptus as a user. The input for cloud-resource-populator looks like this:

[USER INFO]
account: ui-test-acct-23
user: user-23
password: mypassword23
[RESOURCES]
running instances: 2
volumes:  2
snapshots: 1
security groups: 10
keypairs: 3
ip addresses: 2
[ITERATIONS]
iterations: 200

With the given input above, cloud-resource-populator will generate the resources as specified in the [RESOURCES] section as the user 'user-23' under the account 'ui-test-acct-23'. When viewed from the Eucalyptus user console, it will look as below:

As soon as the resources are populated according to the specification, cloud-resource-populator will immediately send requests to the cloud to release all the allocated resources, which makes the console view look as below:

And, as you would have guessed, the process of populating and releasing the resources is repeated [ITERATIONS] times.
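
Stripped of Eutester's plumbing, the populate-and-release cycle amounts to something like the Boto sketch below; the credentials and resource counts are placeholders, only keypairs and security groups are shown, and connecting to a Eucalyptus endpoint rather than AWS would additionally need the cloud's host and port:

# Sketch of one populate-and-release cycle using plain Boto (placeholders throughout).
import boto

conn = boto.connect_ec2(aws_access_key_id="AKIA...",
                        aws_secret_access_key="SECRET...")

ITERATIONS = 200
for i in range(ITERATIONS):
    # Populate: create the requested resources for this iteration.
    keypairs = [conn.create_key_pair("monkey-key-%d-%d" % (i, n)) for n in range(3)]
    groups = [conn.create_security_group("monkey-group-%d-%d" % (i, n), "stress test")
              for n in range(10)]

    # Validate: ask the cloud what it thinks exists before tearing anything down.
    assert len(conn.get_all_key_pairs([kp.name for kp in keypairs])) == len(keypairs)

    # Release: tear everything back down and start over.
    for kp in keypairs:
        conn.delete_key_pair(kp.name)
    for group in groups:
        conn.delete_security_group(group.name)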

The most appealing feature of Euca Monkey is that it launches a webservice to render the progress of the stress-testing in real time.

The graph above shows the input values for a few iterations of the resource population and tear-down process. Tracing the running-instance line, which is in red, this graph tells us that 20 instances were started on the first request, then those 20 instances were all terminated on the second mark, bringing the count down to 0. Then 20 instances were started again, then terminated, and so on. Such operations were repeated 7 times on the graph above. Also notice that there are 4 other resources being populated and released 7 times as well.

When the cloud is behaving nicely, the actual resources should be populated and released in step with the input values in the graph above, resulting in the output graph below:

While the first graph shown above renders the input values to the tool cloud-resource-populator, this graph is showing the actual values reported by the cloud. The fact that these two graphs look the same means “Yay Cloud!!”

However, occasionally, during the development of Eucalyptus, you would see the graph like below on a rainy day in Santa Barbara:

The graph above reveals an interesting state of the cloud. Notice that the “running instance” line went from the “nice” behavior pattern to a flat line. It means that the cloud was able to launch and terminate 20 instances during the first phase, but somehow got stuck in a state where it was not able to release instances, nor launch more instances, leaving it stuck with 18 running instances. Notice, though, that the other resources were making the usual progress as before, except the security groups. It turned out that what we were witnessing was a deadlock in the Eucalyptus Cluster Controller, which occurred around the 8th hour of the stress-testing. R.I.P. CC. 😦

The purpose of stress-testing is to push the limit of the system to the point where malfunctions and faulty behaviors of the system can be observed. Such stress testing is crucial for the development of a distributed system like Eucalyptus; many unknowns and bugs are constantly introduced to the system as new features from various components are being integrated. Thus, having an easily deployable stress-testing tool with visualization support, such as Euca Monkey, yields tremendous benefits for the developers in an agile development environment since the tool aims to ensure the system’s stability and reliability throughout the rapid development cycle.

If you would like to take Euca Monkey for a spin, feel free to check out the GitHub link below. On a fresh CentOS 6 machine or VM, it will take 10 simple steps to launch your own monkey.

https://github.com/eucalyptus/euca-monkey

Other Resources:

Eutester – https://github.com/eucalyptus/eutester

Boto – https://github.com/boto/boto

Eucalyptus QA – https://github.com/eucalyptus-qa

Open QA for Eucalyptus – https://github.com/eucalyptus/open-qa

OPEN QA for EUCALYPTUS

Open Source Software Project… What is it good for?

It’s free! It’s for the community! It’s the future of software! It’s better than closed! Everyone is doing it! It drives innovation! It’s the way of Steve Jobs (huh?)… And, it’s free!

I mean, seriously.

All I hear is nothing but the presuppositions on the greatness of open source — how it will greatly benefit you and your great organization.

But, can someone tell me how an open source project benefits from being out in the open?

Yes, open source is free. Yes, you can download it and use it without paying a dime, which is fantastic by anyone’s standard.

Then, you will soon realize, “Wait, that sounds just too good to be true. There must be something they get in return for giving it away for free.”

Yes, it is absolutely true, and it comes down to this one crucial benefit:

QUALITY.

It turns out, among those freeloaders, there exist these rare kinds who defy intuition and want to give it back for some reason — there are those who want to share their abilities to fix things, the abilities to break things, the abilities to point out flaws, the abilities to compliment beauty, the abilities to talk smack, and the abilities to appreciate what others have done.

When the project is out in the open, feedback from these enthusiasts comes in various forms, in very chaotic ways, which is overwhelming at first. However, every single interaction with these folks contributes to one significant attribute of the open source project — it pushes the quality of the software.

Eucalyptus has been striving to achieve an incomparable goal, that is to be the most tested, thus the most stable, in turn, the most dependable cloud infrastructure in the open. It has been a long and tough road for us to march on through the ups-and-downs of tech-industry turmoil. However, the commitment toward this goal has been unwavering.

As part of the effort to become the most dependable open source cloud infrastructure, Eucalyptus is welcoming all community members to participate in the quality assurance process of Eucalyptus development.

OPEN QA for Eucalyptus Wiki Page:

https://github.com/eucalyptus/open-qa/wiki

OPEN QA Website (Beta Version):

http://ec2-50-112-61-121.us-west-2.compute.amazonaws.com/open_qa.php

Every community member is invited to check out the Open QA website on a daily basis and provide feedback and criticism on the development process of Eucalyptus.

You will find Eucalyptus developers engaging in active conversations on the IRC channels: #eucalyptus, #eucalyptus-devel, and #eucalyptus-qa on irc.freenode.net. Or, you may also prefer posting your thoughts on a forum via: https://engage.eucalyptus.com Or, please feel free to directly file bugs on Eucalyptus via JIRA at: https://eucalyptus.atlassian.net/browse/EUCA

But, in whichever way you decide to engage with us, please never hesitate to:

Live long and prosper \\//

— Open QA for Eucalyptus —

Pigeons on a Euca: Eucalyptus Cloud Monitoring Mobile App via Twitter

Being a system administrator is the easiest job in the building when the system is working; no one questions your presence or existence. Your tasks are highly under-appreciated during times of peace, yet you do not mind such vanity since you’d rather be reading blogs and watching YouTube in serenity. Every once in a while, an idiot cracks an ancient joke, “hey, aren’t you supposed to be working?”. But I am working, you imbecile employee-of-the-month.

However, the curse begins once you step out of the building, entering the realm of unknowns, far-disconnected from the comfort of your Macbook and Wi-Fi. While staring at endless tail-lights on a freeway, having a long walk on the beach, or being queued behind 7 shopping carts at a grocery market, your mind begins to wonder, “how’s my system doing?”

Since day 1, you have set up numerous layers of email-notification alarms, but it’s never enough; “getting the e-mails” only means “it’s too late”. There is always an urge to log in. But you can’t. You are cut off. You are trapped; the lady in front of you just pulled out a checkbook while the sign clearly says “credit or cash only.” You begin to panic. You compulsively refresh emails on your smartphone, but no answers. The silence is deafening. No news is never good news. The only exit is when the system whispers in your ear: “Have no worries. Everything is working… For now.”

Now, your worrying days are over. In the midst of the 3G wilderness, the application Pigeons on a Euca will deliver you peace and tranquility comparable to that of a laptop on VPN. Of course, that is if you are equipped with a smartphone at all times and the system is running Eucalyptus Cloud.

The trick is to run a periodic Cloud monitoring app via Twitter.

Instead of being passively notified by email when problems arise in the system, you can set up the application Pigeons on a Euca, which runs a small script that actively “tweets” the status of the cloud for you and your fellow sysadmins to follow.

Screenshot of Pigeons on a Euca on iPhone

Here are the requirements:

  • Have a twitter account opened for this application.
  • Have a machine, or a virtual machine, running Linux with network capability.
  • Have the cloud admin’s credentials.
  • Have a smartphone with Twitter Client App installed.

Current (Beta) Features**:

  • Every 1 min, it tweets status changes of running instances*.
  • Every 10 min, it tweets the number of currently running instances in the cloud*.
  • Every 10 min, it tweets the number of newly launched instances in the cloud*.
  • Every 10 min, it tweets each availability zone’s information.

* These features rely on the new version of euca2ools (v 2.0)

** The application is highly configurable so that more reporting can easily be added when needed.
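
Stripped to its essence, the reporting loop behind these features boils down to the sketch below. It is illustrative only: the real tool is a Perl script built on Net::Twitter::Lite, and the posting step here is left as a placeholder rather than a real Twitter API call:

# Illustrative sketch of the polling loop (the actual application is written in Perl).
import subprocess
import time

def count_running_instances():
    # Instance rows in "euca-describe-instances" output start with "INSTANCE".
    output = subprocess.check_output(["euca-describe-instances"])
    rows = [line for line in output.splitlines() if line.startswith(b"INSTANCE")]
    return sum(1 for row in rows if b"running" in row)

def post_tweet(message):
    # Placeholder: the real application posts via the Twitter API using the OAuth keys.
    print("tweeting: %s" % message)

while True:
    post_tweet("running instances: %d" % count_running_instances())
    time.sleep(600)  # report every 10 minutes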

Instructions on How to Set up the Application Pigeons on a Euca

First, you need to open a new twitter account.

1. Go to twitter.com and sign up for a new account; if you already have an account with Twitter, you are going to need a new email address to create a new twitter account for this application.

2. As soon as the new twitter account is open, check the “Protect my Tweets” box in the Tweet Privacy section so that your tweets do not accidentally get broadcast to the public channel.

3. Apply for a developer account at dev.twitter.com.

4. After opening the twitter developer account, you need to create an application at dev.twitter.com.

5. Fill out the application details form. There is no requirement on what you put in the name and description boxes. For the website box, you may specify any working web URL of your choice; it won’t matter for this application.

6. After creating the application, on the “settings” page, change the access level to “Read and Write”.

7. Click the button “Change this Twitter application’s settings” at the bottom of the page to apply the change. It might take a few minutes for the change to be applied.

8. On the “Details” page, verify that the access level is changed to “Read and write“. After seeing that the change has taken place, click on the button “Create my access token” at the bottom of the page.

9. Go to the page “OAuth tool” and verify the consumer key and access token are generated. You will need these keys to configure the application Pigeons on a Euca later.

10. At this point, your twitter account is configured to receive script-generated tweets from the application Pigeons on a Euca.

Second, after the twitter account is ready, you need to set up the machine where the application Pigeons on a Euca will run.

1. Install a perl module “Net::Twitter::Lite” on your Linux box.

You may install the module from source by visiting the link:

http://search.cpan.org/~mmims/Net-Twitter-Lite-0.10004/lib/Net/Twitter/Lite.pm

Or, for Ubuntu distributions, such as Lucid, you may simply add the line:

deb http://ubuntu.mirror.cambrium.nl/ubuntu/ lucid main universe

to "/etc/apt/sources.list".

Then, install the Perl module by using the commands:

apt-get update
apt-get install libnet-twitter-lite-perl

2. Install the latest version of “euca2ools” (v 2.0)

Visit the website (http://open.eucalyptus.com/downloads) for detailed instructions on how to install the latest euca2ools.

For Ubuntu distributions, such as Lucid, add the line:

deb http://downloads.eucalyptus.com/software/euca2ools/2.0/ubuntu lucid universe

to "/etc/apt/sources.list".

Then, install euca2ools by using the commands:

apt-get update
apt-get install euca2ools

Third, after all the necessary modules and tools are installed on the machine, you can finish setting up the application.

1. Download the tarball pigeons_on_a_euca.tar.gz from the project repository:

https://projects.eucalyptus.com/redmine/projects/pigeons-on-a-euca/files

or

https://github.com/eucalyptus/pigeons_on_a_euca

2. Untar the tarball at a directory of your choice:

tar zxvf pigeons_on_a_euca.tar.gz

3. On the "my applications" page on dev.twitter.com, copy the lines in the "OAuth tool" section, as shown in Step 9 of the first instruction set.

4. Change the lines in the file “./pigeon_on_a_euca/pigeon_cage/key/o_auth_settings.key” with your account’s actual values.

5. Perform a quick check to validate the setups so far by running the commands:

cd ./pigeon_on_a_euca/pigeon_cage

perl ./tweet_it_away.pl ./tweets/mytest.tweet

6. Check the twitter account to verify that the line “this is a test” was tweeted. Also notice the lock sign on the tweet that indicates the security level is private.

7. After verifying the test tweet, go to the directory “./pigeons_on_a_euca/credentials” and store your Eucalyptus cloud’s admin credentials.

8. Verify that you can talk to your Eucalyptus cloud via the admin credentials by running the commands:

cd ./pigeons_on_a_euca/credentials

source eucarc

euca-describe-availability-zones verbose

9. At this point, the application is all set to run. Do a quick check by running the main script:

perl ./activate_the_pigeons.pl

10. Check the twitter account to verify that the status of instances running on the cloud are being tweeted.

11. To run the main script in the background, do:

nohup perl ./activate_the_pigeons.pl > ./stdout.log 2>> ./stderr.log &

12. To monitor the run:

tail -f stdout.log

Last, install any Twitter Client App on your smartphone and follow the account that you created above.

Now you have a mobile application that keeps you updated on the status of the cloud.

Warning: The number of tweets generated by the application might be overwhelming; at its maximum rate, it will post 350 tweets per hour. It is recommended that you and your co-workers open a separate twitter account exclusively for receiving tweets from this application.

And, if you decide to modify the script, please be aware of the hourly limit of the tweet updates, which is set to be 350 tweets per hour. Carefully limit your tweets so that the application maintains consistent tweet-ability.

Thank you for your interest in the application, and feel free to contribute and share.

Kyo
