Kyo Lee

Open-Source Cloud Blog

Category: software development

Docker Strange Dev: How I learned to stop worrying and love the VM

A couple of months ago, my laptop died. I remember the day I brought it home for the first time. Still in college. I made a big decision—despite my marginal finances—to go with the top model. “It should last at least five years,” I convinced myself. Seven years later, I got the news from the certified Apple repair person: “Well, you should just buy a new laptop rather than try to fix this one.” So I did.

Now, I have a new laptop, which is currently placed on a pedestal. Watching how fast applications open and close on this laptop makes me cry. The future has arrived. I tell myself, “No! I will not f*** this one up this time! I will never install any software on this laptop ever!” Of course, I am in denial. This pure awesomeness will eventually decay. I predict that the rate of decay accelerates exponentially with the number of applications installed on the laptop (Disclaimer: no supporting data exists for this assertion—other than my paranoia).

I begin searching for the answer to the quest: Install no software. So I try Docker.


The noticeable difference between using Docker, which runs Linux containers, and using a virtual machine (VM) is speed.

Both options provide the ability to isolate software’s running environment from the base operating system. Thus both deliver the desirable paradigm: build once and run everywhere. And both address my concern—that the base operating system must remain minimal and untouched. However, Docker stands out from the classic VM approach by allowing the cloud application—which is a virtual image with the desired software installed—to run within milliseconds. Compare that to the minutes it takes to boot a VM instance.


Why does this difference matter?

Docker removes a big mental roadblock for developers. Thanks to Docker’s superior response time, developers can barely perceive the distinction between running applications in a virtual image (a virtual container, to be precise) and running applications on the base OS. In addition to its responsiveness, Docker appeals to developers by obsoleting the tedious procedures of booting a VM instance and managing the instance’s life cycle. Unlike the previous VM-centric approach, Docker embraces an application-centric design principle. Docker’s command line interface (CLI) below asks us two simple questions:

1. Which image to use?

2. What command to run?

docker run [options] <image> <command>

With Docker, I can type single-line commands to run cloud applications, similar to other Linux commands. Then I will have my database servers running. I will have my web servers running. I will have my API servers running. And I will have my file servers running. All of this is done with single-line commands. The entire web stack can now be running on my laptop within seconds. The best part of all this? When I’m done, there will be no trace left on my laptop—as if none of it ever existed. The pure awesomeness prevails.


Running Eucalyptus Console on Docker

 

For those who want to check out Eucalyptus Console to access Amazon Web Services, here are the steps to launch Eucalyptus Console using Docker on OS X.

Step 1. Install Docker on your laptop

Here is a great link that walks you through how to install Docker on OS X:

http://docs.docker.com/installation/mac/

Step 2. Pull Eucalyptus Console Docker image repository

Run the command below to pull the Eucalyptus Console Docker images (it will take some time to download about 1.5 GB of image files):

docker pull kyolee310/eucaconsole

Run the command below to verify that the eucaconsole images have been pulled:

docker images

For those who want to build the images from scratch, here is the link to the Dockerfile used:

https://github.com/eucalyptus/dockereuca

Step 3.  Update Docker VM’s clock

When running Docker on OS X, make sure that the Docker VM’s clock stays synchronized properly. A skewed clock can cause problems for some applications on Docker. In order to fix this issue, you will need to log into the Docker VM and synchronize the clock manually.

You can SSH into the Docker VM using the command below:

boot2docker ssh

Once logged in, run the command below to sync the clock:

sudo ntpclient -s -h pool.ntp.org

Run the command below to verify that the clock has been synced:

date

One more piece of patchwork is to create an empty “/etc/localtime” file so that you can link your OS X localtime file to the Docker VM’s localtime file at runtime:

sudo touch /etc/localtime

Exit the SSH session:

exit

This issue is being tracked here:

https://github.com/boot2docker/boot2docker/issues/476


Step 4. Launch Eucalyptus Console via Docker

Run the command below to launch Eucalyptus Console on Docker:

docker run -i -t -v /etc/localtime:/etc/localtime:ro -p 8888:8888 kyolee310/eucaconsole:package-4.0 bash

It’s a shame: running a live “bash” session is not Docker’s way of doing things, but bear with me for the moment until I figure out how to run Eucalyptus Console properly without using the “service” command.

The command above will open a bash shell session for the eucaconsole image. Once inside, run the command below to launch Eucalyptus Console:

service eucaconsole start

Step 5.  Open Eucalyptus Console on a browser

Run the command below to find out the IP of the Docker VM:

boot2docker ip

, whose output would look like:

    The VM’s Host only interface IP address is: 192.168.59.103

Using the IP above, access Eucalyptus Console at port 8888:

ex. http://192.168.59.103:8888/


In order to access AWS, you will need to obtain your AWS access key and secret key. Here is the AWS documentation on how to do so:

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html

Step 6. Log into AWS using Eucalyptus Console


A Developer’s Story on How Eucalyptus Saved the Day

This is a short story about how a UI developer at Eucalyptus was able to use Eucalyptus to save time on the development of Eucalyptus.


The agenda of the day is to set up Travis CI for the latest Eucalyptus user console, Koala. Quoted from Wikipedia, “Travis CI is a hosted, distributed continuous integration service used to build and test projects hosted at GitHub.” In other words, we want to set up an automated service hook on Koala’s GitHub repository so that whenever developers commit new code, “auto-magic” takes place somewhere on the Internet, which ensures that the developers did not screw things up by mistake, which, in turn, allows us developers to sit back and enjoy a warm cup of “post-commit-victory” tea while our thoughts drift away on the sea of Reddit.

But, before arriving at such Utopia, first things first. Must read the instructions on Travis CI.

Luckily, Travis CI put together a nice and comforting set of documentation on how to hook a project into Travis CI (http://about.travis-ci.org/docs/user/getting-started/) as well as an impressive, pain-free registration interface. Things are as easy as clicking buttons for the first few steps so far.

And, of course, nothing ever comes that easy. Now I am looking at the part where I need to create the YAML configuration file for Koala’s build procedures. Done with the button-clicking. Time to put down the cup of tea because now I’ve got some reading and thinking to do.

A few minutes later, I am stunned by the line below:

“Travis CI virtual machines are based on Ubuntu 12.04 LTS Server Edition 64 bit.”

Oh, bummer.

The main development platform for Koala has been CentOS 6, meaning Koala’s build dependencies and scripts have been targeted at a CentOS 6 environment. This means that I need to go over the build procedures and dependency settings so that Koala can be built and tested on Ubuntu 12.04. But, first, where do I find those Ubuntu machines?

Then, the realization, ‘Wait a second here. I have Eucalyptus.’

I open up a browser and log into the Eucalyptus system which I have been using as the backend to develop Koala. I launch a couple of Ubuntu 12.04 instances. Within a minute, I have 2 fresh Ubuntu 12.04 virtual machines up and running.

Immediately I log in to the first instance and start installing Koala to validate the build procedures in the Ubuntu 12.04 environment. Along the way, I discover various little issues in this new environment and tweak things around to fine-tune Koala’s build procedures. Once they feel ready, I log into the other Ubuntu instance to verify the newly adjusted build procedures in its fresh setting. More mistakes and issues are captured, and more adjustments are made. Meanwhile, the first instance has been shut down and a new Ubuntu instance has been brought up. With this new instance, I am able to rinse and repeat the validation of the build procedures. Of course, there are some mistakes again. They get fixed and adjusted. Meanwhile, another instance goes down and comes up fresh.

The juggling of the instances lasts a couple more rounds until the build procedures are perfected. Now I am confident that Koala will build successfully in a Ubuntu 12.04 environment. I commit the new build YAML script to GitHub. It’s time for the warm cup of “post-commit-victory” tea.


[Cloud Application] Run Eucalyptus UI Tester on your Mac using Vagrant


Initially, this blog was written to be a technical blog describing the instructions for running the Eucalyptus UI Tester (se34euca) on your Mac using Vagrant and Virtual Box. However, writing this blog has made me reiterate the benefits of running or developing applications on a virtual machine.

Background: Automated Tester As An Application (ATAAA)

When developing software, there is a need for having an automated test suite readily available; with the click of a button, a developer should be able to run a sequence of automated tests to perform a speedy sanity check on the code that is being worked on.

Traditionally, a couple of in-house machines would be dedicated to serve as automated testers, shared by all developers. In such a setup, the developers would have to interact with the tester machines over VPN, which can get quite hectic sometimes — especially for those developers who like to hang out at a local coffee shop.

Now, with Vagrant and Virtual Box, you can have your own personal automated tester as a “cloud application” running on a laptop. In this scenario, when the code is ready for testing, you can quickly run a set of automated tests on your laptop by launching a virtual image that has been pre-configured to be the automated tester for the project/software. When finished, the virtual instance can be killed immediately to free up the resources on the laptop.


Benefits of Running Applications on a Virtual Instance

As mentioned in the introduction, while preparing this Eucalyptus UI Tester to run as a cloud application, I rediscovered my appreciation for using virtual machines as part of the software development environment. The fact that the application runs on a virtual image brings the following benefits: contain-ability, snapshot-ability, and portability of the application.

1. Contain-ability

Running the application on a virtual instance means that no matter how messy the application’s dependencies are, they all get installed in a contained virtual environment. This means that you get to keep your precious laptop clean and tidy, protecting it from all those unwanted, unstable, experimental packages.

2. Snapshot-ability

When working with a virtual instance, at some point you should be able to stabilize the application, polish it up to a known state, and take a snapshot of the virtual image in order to freeze that moment. Once the snapshot is taken and preserved, you have the ability to bring the application back to that known state at any time. It’s just like having a time machine.


3. Portability

When working with a team or a community, the portability of the application on a virtual image might be the most appealing benefit of all. Once you polish up the application to run nicely on a virtual image, then the promise is that it will also run smoothly on any other virtual machines out there — including on your fellow developers’ laptops as well as on the massive server farms in a data center, or in the cloud somewhere. Truly your application becomes “write once, run everywhere.”


Running Eucalyptus UI Tester on Your Mac Laptop via Vagrant

If you would like to run Eucalyptus UI Tester from scratch, follow the steps below:

1. Installing Vagrant and Virtual Box on Mac OS X in 5 Steps

and

2. Installing Eucalyptus UI Tester on CentOS 6 image via Vagrant

If you would like to run Eucalyptus UI Tester from the pre-baked Vagrant image, follow the steps below:

1. Installing Vagrant and Virtual Box on Mac OS X in 5 Steps

then

3. Running PreBaked Eucalyptus UI Tester Image using Vagrant

, and see 4. Creating a New Vagrant Package Image if you are interested in creating a new image via Vagrant.

Instructions

1. Installing Vagrant and Virtual Box on Mac OS X in 5 Steps

https://github.com/eucalyptus/se34euca/wiki/Installing-Virtual-Box-and-Vagrant-on-Mac-OS-X

2. Installing Eucalyptus UI Tester on CentOS 6 image via Vagrant

https://github.com/eucalyptus/se34euca/wiki/Installing-se34euca-on-Centos-6

3. Running PreBaked Eucalyptus UI Tester Image using Vagrant

https://github.com/eucalyptus/se34euca/wiki/Running-PreBaked-se34euca-Image-using-Vagrant

4. Creating a New Vagrant Package Image

https://github.com/eucalyptus/se34euca/wiki/Creating-a-New-Vagrant-Package-Image


DevOps Culture — Fail Fast on Eucalyptus

At a meetup event down in San Diego, California, Eucalyptus had a chance to meet Sander van Zoest (@svanzoest), the VP of technology at OneHealth (http://www.onehealth.com/), who is also the organizer of the San Diego DevOps group (http://www.meetup.com/sddevops/). Sander and his team at OneHealth have been using the Eucalyptus cloud for some time. Asked why OneHealth runs Eucalyptus in-house, Sander had some interesting stories to tell about dealing with health-related data and the company’s DevOps engineering culture.


Due to the strict regulations on Protected Health Information (PHI), OneHealth needs to take extra strong measures if they are to provide their services on AWS; Sander spent a good amount of time explaining to us how demanding it is to satisfy the regulations. Such barriers make it complicated to push any personally identifiable health information to the cloud.

In the AWS case, the very specific barrier was that AWS provides no legal protection when storing sensitive data in its cloud storage. For instance, HIPAA and HITECH regulations require that OneHealth be able to promise a 72-hour response time to inform their customers about a data breach, should it ever happen, and provide an ETA to identify and patch the security hole that caused the breach.

Sander points out that, at the moment, AWS does not guarantee such protections/services. For this reason, OneHealth’s production environment is deployed at a Rackspace co-location facility, since Rackspace provides HIPAA Business Associate Addendums. However, given the evolving nature of the public cloud, it is very “cloudy” to predict how things are going to change in the near future. The recent announcement by AWS on CloudHSM (http://aws.typepad.com/aws/2013/03/aws-cloud-hsm-secure-key-storage-and-cryptographic-operations.html) — although it doesn’t cover the legal protection — is a good indicator of AWS’s interest in providing secure storage services going forward.


What this uncertain, “cloudy” future means for engineering at OneHealth is employing a variety of infrastructure environments to take advantage of each platform while staying flexible. It becomes essential to design OneHealth’s services and applications to be deployable on bare-metal systems at Rackspace (production environment), AWS (sandbox/staging environment), Eucalyptus (in-house continuous integration and testing environment), and engineers’ laptops using Vagrant (http://www.vagrantup.com/) (development and testing environment).

Across such heterogeneous systems, from production down to the engineer’s laptop, the development environment — the OS, dependencies, configurations, etc. — needs to be kept uniform via virtualization and automation, allowing new code to be pushed seamlessly from the laptop up to production. For handling the life cycle of machines and VM instances, the engineers at OneHealth are big fans of Chef (http://www.opscode.com/chef/), which makes the configuration management portable across infrastructure platforms. For virtual machines, the instance images are prepared via Debian preseed files while leveraging an open source tool, VeeWee (https://github.com/jedi4ever/veewee).


At OneHealth, the philosophy of DevOps is deeply embedded in every aspect of its development and operation. The concept of DevOps was not new to many engineers who brought in the ideas of “Infrastructure as Code” and “Commit Often and Fail Fast” from previous companies such as MP3.com and Joost.

Speaking of DevOps culture, one fun fact Sander mentioned — which goes against intuition for many traditional IT shops — was that the operation team at OneHealth likes to take down the instances and rebuild them regularly. The recycling of the instances ensures the “freshness” of the deployed services and applications. The operation engineers should be more concerned if an instance’s uptime was longer than, say, 30 days because it meant that the content of the instance was outdated, possibly containing unfixed bugs or security issues. If the deployment setup was doing what it was supposed to be doing, then it should have killed the outdated instance and brought up a new instance with the latest updates.

The same goes for the development environment. It is much better to refresh the dev environment instances through frequent relaunching and rebuilding than to have the developers work on a stale dev environment, which turns out to be more harmful to the development. Plus, this destroy-and-rebuild enforcement encourages the developers to consistently check their code into a version-controlled repository, allowing early detection of conflicts in the code.

All of these procedures, bringing together datacenter automation and configuration management, are part of a very new movement in software development now labeled “DevOps”. The DevOps folks often joke that even a few years ago the terminology didn’t exist, but now DevOps has become the most sought-after practice in IT, all thanks to the widespread adoption of cloud computing, which gave birth to the programmable infrastructure.


Beyond Continuous Integration: Locking Steps with Dev, QA, and Release

Continuous integration: the practice of frequently integrating one’s new or changed code with the existing code repository [wikipedia]

In this blog we will talk about how the continuous integration process was put in place for the new component, Eucalyptus User Console, in order to coordinate the efforts among the dev, QA, and release teams throughout the development cycle of Eucalyptus 3.2.

Background

Eucalyptus User Console is a newly introduced component in Eucalyptus, whose main goal is to provide an easy-to-use, intuitive browser-based interface to cloud users, thus assisting in dev/test cloud deployments among IT organizations and enterprises. Eucalyptus User Console consists of two components: a JavaScript-based client-side application and a Tornado-based user console proxy server.

Early Involvement

The first phase of the development was to come up with a quick prototype to demonstrate how the user console would work under the given initial design of the architecture (see the Eucalyptus Console components layout diagram above). As soon as the prototype was evaluated and its feasibility was verified, the release team started creating the packages for two major Linux OS platforms: Ubuntu and Centos/RHEL.

The early involvement of the release team turned out to be the best help any developers or QA engineers could ask for; from the very beginning stage of the development, the release team was able to provide invaluable information that served as a guardrail for the fast-moving development. Such information included advice on how the files should be named and organized and on which dependencies should or should not be used in order to meet the requirements of various Linux distributions. Dealing with such issues at a later stage of the development would undoubtedly have been a major pain in the back-end.


Furthermore, the release team was able to ensure that the development of the new user console would never go off track with respect to the Linux distro requirements by setting up an automated daily package-building process using Jenkins — which utilizes the VM resources from our Release cloud that runs on Eucalyptus.

Keeping Up With Eucalyptus

Setting up the automated process to build the packages allowed the release team to keep an eye on the progress of the user console’s development in terms of the ability to build the packages according to the constraints set by the Linux distributions. However, it would not guarantee that the newly built packages contained a version of the user console that worked with the current, up-to-date Eucalyptus cloud that was also in development.

Thus, the challenge was to ensure that the latest built user console packages work with the latest built Eucalyptus throughout the development.

In order to solve this issue, the QA team created a testunit that automatically installs the latest user console packages on a newly built Eucalyptus. Then, the testunit was added to the main test sequences used by the Eucalyptus 3.2 development in our automated QA system, making the installation of the latest user console packages accessible to all developers at Eucalyptus.

This setup ensured that a failure in the user console package installation would be seen by all developers throughout the development, thus allowing the failure to be detected and reported quickly.


The testunit ui_setup can be seen in action above in the table, which displays the results of the test sequence run by the automated QA system. Check out the link below for more details on this testunit:

https://github.com/eucalyptus-qa/ui_setup

Circle of Trust

As the user console evolved out of its prototype state and took on a more product-like shape, the QA team was working in parallel, figuring out how to set up the automated testing process for the user console. The blog here talks in detail about how Selenium was used to create the automated web-browser testing tool, se34euca.


In the mid-stage of the development, as the features of the user console started functioning in a reasonably stable manner, 3 automated tests were added — incrementally — to ensure the working state of the user console throughout the development.

Those 3 tests are:

  1. user_console_view_page_test – https://github.com/eucalyptus-qa/user_console_view_page_test
  2. user_console_generate_keypair_test – https://github.com/eucalyptus-qa/user_console_generate_keypair_test
  3. user_console_launch_instance_test – https://github.com/eucalyptus-qa/user_console_launch_instance_test

These automated tests were to ask the 3 simple questions below on a daily basis:

  1. Can the user log in and see all the landing pages on the latest user console?
  2. Can the user generate a new keypair using the latest user console?
  3. Can the user launch a VM instance using the latest user console?

Of course, it would be possible, and desirable, to ask more questions in a more complicated fashion. However, during the rapid development phase, asking those 3 simple questions on a daily basis turned out to be sufficient, and effective, for understanding whether something terrible had happened to the user console or not.


The goal of these automated tests at this stage of the development was not to detect every little defect in the product. Not just yet.

The main purpose is rather to serve as an indicator for the developers, QA engineers, and release engineers to assure ourselves that the change that went into the code earlier today did not ruin the delicate trust among the three groups, meaning that the build, installation, and configuration procedures are still intact. Having such assurance checked by mechanical means has made the three groups extremely effective in discovering issues during the development, since it allowed each member to narrow down exactly what was responsible for a defect within a finely reduced time frame — hours rather than days or weeks.

Guardrail For Development

Having the automated package build process and the automated installation/configuration process in place at the early stage of the development proved to be extremely useful; rather than merely agreeing on written procedures, the dev, QA, and release teams materialized such agreements into an actual implementation and put them to work using various automated mechanisms that run on a daily basis. Therefore, throughout the development, we were able to witness and assure ourselves that we were making progress in accordance with the plan and our self-imposed restrictions.

Check out the Eucalyptus Open QA webpage to see the continuous integration at Eucalyptus in action:

Eucalyptus Open QA (beta) – http://ec2-50-112-61-121.us-west-2.compute.amazonaws.com/open_qa.php

TCP Dumpster: Monitoring TCP for Eucalyptus User Console

This is part III of the Eucalyptus Open QA blog series that covers various topics on the quality assurance process for Eucalyptus’s new user console.

In this blog, we would like to share how we monitor the traffic on the user console proxy, using the Linux command ‘tcpdump’ and its rendering application ‘tcpdumpster’, to derive and understand the behaviors of users interacting with the user console.

Background

The Eucalyptus user console consists of two components: a JavaScript-based client application and a Tornado-based user console proxy. When logged in, the client-side application, which runs in a user’s web browser, polls the user’s cloud resource data at a certain interval, and the user console proxy, located in between the cloud and the users, relays the requests originating from the client applications.


Recall from the first blog of the series that our challenging question was: when 100+ users are logged into the Eucalyptus user console at the same time, would the user console proxy be able to withstand the traffic generated by those 100+ users? Plus, how do we ensure the user experience under such heavy load?

The answer to the questions above was provided in detail here.

The short answer is to generate 100+ user traffic using the automated open-source web-browser testing tool, Selenium, while manually evaluating the user experience on the user console.

However, prior to answering the questions above, first we needed to establish a way to quickly, yet effectively monitor the traffic between the clients and the proxy in order to make observations on the patterns and behaviors of the traffic.

TCP Dump

‘tcpdump’ is a standard tool for monitoring TCP traffic on Linux. For instance, if the user console proxy is running on port 8888 on the machine 192.168.51.6, monitoring the traffic on port 8888 can be as simple as running the command below in a Linux terminal on 192.168.51.6:

tcpdump port 8888

This command will “dump” out information on every packet that crosses port 8888 on the machine 192.168.51.6. However, the information generated by this command is just too overwhelming; it flies by on the terminal screen as soon as the user consoles start interacting with the proxy. There had to be a better way to render the output of ‘tcpdump’.

 TCP Dumpster

At Eucalyptus, using the automated QA system, a new, up-to-date Eucalyptus system is constantly installed and torn down within a life span of a day or two (check out here to see the Eucalyptus QA system in action). For this reason, we needed a quick way to set up the monitoring application on the machine where the proxy is installed. Plus, we wanted to have all the necessary monitoring information displayed on a single HTML page for a quick glance, making it easier for the observer to apply intuition to understanding the big picture. As a result, ‘tcpdumpster’ was born.


The application ‘tcpdumpster‘ runs on the same machine where the proxy is installed. It runs the Linux command “tcpdump port 8888” and parses its output into a list file. This list tracks 8 attributes of the TCP traffic:

  • Unique connections, based on IP
  • Unique connections, based on Port
  • Connection count, per second
  • Connection count, averaged over a minute
  • Connection count, in total
  • Packet length, per second
  • Packet length, averaged over a minute
  • Packet length, in total

With those 8 attributes displayed on a single HTML page, which can be accessed via:

http://192.168.51.6/tcpdumpster.php

, we were able to make some interesting observations on the behaviors of the traffic as the user console starts interacting with the proxy.
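
For the curious, here is a minimal Python sketch of how such per-second aggregation could be done. This is written for illustration only (it is not the actual ‘tcpdumpster’ code) and assumes tcpdump’s default one-line-per-packet output:

import re
import subprocess

# Run tcpdump (as root) line-buffered and without name resolution on the proxy port.
proc = subprocess.Popen(['tcpdump', '-l', '-n', 'port', '8888'],
                        stdout=subprocess.PIPE, universal_newlines=True)

per_second = {}  # 'HH:MM:SS' -> [packet count, total packet length]

for line in proc.stdout:
    # A typical line looks like:
    # 21:00:01.123456 IP 192.168.51.1.54321 > 192.168.51.6.8888: Flags [P.], ..., length 123
    match = re.match(r'(\d\d:\d\d:\d\d)\..* length (\d+)', line)
    if not match:
        continue
    second, length = match.group(1), int(match.group(2))
    count, total = per_second.get(second, [0, 0])
    per_second[second] = [count + 1, total + length]
    # The real tool would periodically write these aggregates to a list file
    # for the HTML page to render; here we simply print them.
    print('%s %s' % (second, per_second[second]))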

TCP Dumpster Examples

The graph below shows the traffic pattern over 7 minutes, generated by a user logged in to the user console.


Notice the first peak, which represents the log-in of the user, followed by the periodic peaks that show the polling of the cloud resource data; user actions can be seen in the blobs among the peaks.

The graph below shows the traffic pattern as more Selenium-based automated scripts are activated to simulate a large number of users.


The first block shows when 1 and 2 Selenium scripts are active, and the second block shows when 6 and 12 Selenium scripts are active (check out here to learn how Selenium was used). When the data is averaged over a minute, the differences between the stages become more visible:

When graphed all together, along with the connection data, the result looks like this:


‘tcpdumpster’ turns out to be very useful when validating whether a newly written Selenium script is behaving correctly. The graph below shows a Selenium script that launches a new instance, waits until the instance is running, then terminates the instance, waits a few minutes, and repeats:


And, of course, ‘tcpdumpster’ is very handy when you are running a long-term test; it allows me to set up the test, go to sleep, and wake up the next day to check out the results. The graph below shows how the proxy was able to withstand constant ‘refresh’ operations from multiple connections for longer than 5 hours:


Now, can you guess what is going on in the graph below?


Check out the GitHub link below and try out ‘tcpdumpster‘ on your own Eucalyptus user console proxy to find out for yourself:

https://github.com/eucalyptus/tcpdumpster

Cloud App. Design: Create a Flexible Automated Web-UI Testing Tool using Selenium

In this article, I will go over the technical details of how we at Eucalyptus used Selenium to simulate a large cloud user workload in order to ensure the quality of the user experience in the new Eucalyptus user console.

As covered in my previous blog, Eucalyptus is coming out with a new user console that is browser-based and intuitive to use, thus playing a key role in promoting cloud adoption among IT organizations and enterprises. But the challenge was to ensure that this brand-new user console would be ready to handle real-world workloads when released out into the wild. The answer to this challenge was to simulate the activities of 150 cloud users using Selenium, an open source tool for automating web application testing.

How to Automate an Online User

The first step was to download Selenium IDE for Firefox. Selenium IDE is a must-have GUI tool for automating clicks and input submissions on a web application. After installing Selenium IDE on your computer, you can start it from Firefox’s Tools menu:

When started, Selenium IDE opens up its own separate window:

Notice the red dot on the top-right corner of Selenium IDE. When clicked, Selenium IDE will start recording all the activities you perform on the web-browser — every link you click and every input you type on the browser will be recorded as command-lines on Selenium IDE as seen below:

What Selenium IDE allows you to do is to replay the recorded activities, such as clicking and typing, on the browser in the exact same order that they were performed.

TIP.

But soon you will notice that when the recording is replayed in Selenium IDE, it tends to fly through all the clicks at lightning speed, so the replayed activities often result in failures — the browser and web application cannot keep up with the speed of the clicks performed by Selenium IDE.

In order to prevent such cases, you will need to manually step through the recorded activities and insert various “pause-and-check” points using the ‘waitForElementPresent’ command. For instance, when there is a command ‘click link=Delete’, I would put a ‘waitForElementPresent link=Delete’ command before the click command to ensure that the page has fully loaded and the link ‘Delete’ is indeed present on the page before allowing Selenium IDE to execute the command ‘click link=Delete’. Later I learned that for every ‘click’ command, it is always a good habit to throw in the ‘waitForElementPresent’ command.
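
The same “pause-and-check” idea carries over to the exported Python WebDriver scripts. Below is a minimal sketch of the pattern using an explicit wait; the console URL is a placeholder and the ‘Delete’ link is just the example from above:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get('http://192.168.51.6:8888/')  # placeholder console URL

# Wait up to 30 seconds for the 'Delete' link to be present before clicking it,
# mirroring the waitForElementPresent-then-click pattern described above.
WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.LINK_TEXT, 'Delete')))
driver.find_element(By.LINK_TEXT, 'Delete').click()

driver.quit()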

After verifying that the recorded action is repeatable via Selenium IDE at full speed, the next step is to export the action into the Selenium Python WebDriver format:

The result of the export above is a script file that describes the recorded Selenium action in Python’s unittest format:

Once you have the script exported, you can run the recorded action on a remote Selenium server without having to open up a web browser. In other words, you can now simulate an online user performing the exact same recorded action on a web browser by simply running the Python script generated by Selenium IDE.
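
To give a rough idea, below is a minimal, hand-written sketch of what such a script can look like. This is not an actual Selenium IDE export; the remote server address, console URL, and title check are made up for illustration:

import unittest

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

class ViewLoginPageTest(unittest.TestCase):

    def setUp(self):
        # Connect to a remote Selenium server instead of opening a local browser.
        self.driver = webdriver.Remote(
            command_executor='http://192.168.51.100:4444/wd/hub',  # placeholder server
            desired_capabilities=DesiredCapabilities.FIREFOX)
        self.base_url = 'http://192.168.51.6:8888/'  # placeholder console URL

    def test_login_page_loads(self):
        self.driver.get(self.base_url)
        # Illustrative assertion; a real export replays whatever clicks and
        # checks were recorded in Selenium IDE.
        self.assertIn('Eucalyptus', self.driver.title)

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()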

Remote Selenium Server Configuration

Before running the script, you will need to configure a machine to run a remote Selenium server, which will behave like a web client. On a Ubuntu machine, execute the following commands:

sudo apt-get -y update
sudo apt-get -y install default-jre
sudo apt-get -y install xvfb
sudo apt-get -y install firefox
sudo apt-get -y install python-pip
pip install selenium
Xvfb :0 -ac 2> /dev/null &
export DISPLAY=:0
# download selenium-server-standalone-2.25.0.jar into the current directory first
nohup java -jar selenium-server-standalone-2.25.0.jar &


After running the commands above, you will have a Ubuntu machine capable of running the exported Python Selenium script, which then simulates an online user opening up a Firefox browser and performing the recorded clicks.

Creating a Flexible, Reusable Testing Tool

Now, your task is to produce many exported Python Selenium scripts covering all activities on the web application; these will be used as building blocks for creating different user behaviors and workflows.

The first collection of Python Selenium scripts I produced was to visit every single landing page on the Eucalyptus user console. The second collection of Python Selenium scripts was to create cloud resources under the default setting. Having those two sets of Python Selenium scripts allowed me to construct complicated user interactions on the web application. For instance, with a bit of shuffling of the scripts, I could build up a user scenario where the online user would visit the keypair page, create a new keypair, visit the dashboard page, visit the security group page, create a new security group, revisit the dashboard page, and so on.

The next task was to consolidate all the scripts into one library file, getting rid of static values in the variables and breaking down the actions in the scripts into functions. Having such a unified library enables test writers to stitch and arrange these functions together to construct whole new user scenarios as needed.

When examining the Eucalyptus user console test framework se34euca, you will see that the main library file ‘lib_euca_ui_test.py’ contains the functions that were exported from Selenium IDE, where each function describes a very specific action to perform on the web console. The files ‘testcase_*.py’ list the arrangements of those functions that form simple or complex user behaviors. Finally, the files ‘runtest_*.py’ are the executables of those test cases, which take the target web console environment as input.
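
As an illustration of that composition, the sketch below stitches a few library functions into a new scenario. The function names here are hypothetical; they do not come from the actual lib_euca_ui_test.py:

# Hypothetical composition of library functions into a user scenario; the
# imported names below are illustrative, not the real se34euca functions.
from lib_euca_ui_test import (login, logout, goto_dashboard_page,
                              goto_keypair_page, create_keypair)

def scenario_create_keypair(driver, console_url, account, user, password):
    login(driver, console_url, account, user, password)
    goto_dashboard_page(driver)
    goto_keypair_page(driver)
    create_keypair(driver, 'ui-test-keypair-00')
    goto_dashboard_page(driver)
    logout(driver)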

Cloud Application

Now that you have a way to convert a Ubuntu machine into a Selenium server and have the Selenium test framework checked into a GitHub repository, you have a way to launch the Selenium test as a cloud instance — using se34euca as an example, the steps are:

Step 1. Launch a cloud instance on a Ubuntu image.
Step 2. Convert the Ubuntu image into a Selenium server by running the configuration commands above, or running the installer in se34euca.
Step 3. Git clone se34euca.
Step 4. Run the test case of your choice.
Step 5. Terminate the instance when the test is finished.

Of course, you can easily automate steps 2, 3, and 4 to wrap the entire process into a single scripted operation. Then, with the help of a cloud infrastructure such as AWS or Eucalyptus, simulating 150 users can be as simple as launching 150 instances and running the script on each instance by feeding the parameters as user-data.
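
As a rough illustration, launching such a batch of tester instances with boto (which the Eucalyptus tooling above is built on) could look like the sketch below. The region, credentials, image ID, keypair, and user-data script name are all placeholders, and for Eucalyptus you would point the connection at your cloud’s EC2-compatible endpoint:

import boto.ec2

conn = boto.ec2.connect_to_region(
    'us-east-1',                                # placeholder region
    aws_access_key_id='YOUR_ACCESS_KEY',        # placeholder credentials
    aws_secret_access_key='YOUR_SECRET_KEY')

# Hypothetical wrapper script that performs steps 2-4 (set up the Selenium
# server, clone se34euca, run the chosen test case) on first boot.
user_data = open('setup_and_run_se34euca.sh').read()

reservation = conn.run_instances(
    'ami-xxxxxxxx',                             # placeholder Ubuntu image ID
    min_count=150, max_count=150,               # one instance per simulated user
    instance_type='m1.small',
    key_name='mykey',                           # placeholder keypair
    user_data=user_data)

print('Launched %d instances' % len(reservation.instances))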

Code Reference

For those who are interested in creating a framework for testing your own web application, please feel free to check out the Eucalyptus user console test framework se34euca at:

https://github.com/eucalyptus/se34euca

for a reference, and leave a comment if you have any questions or suggestions.

Simulate 150 Cloud User Activities Using Open Source Tools

For the 3.2 release this December, Eucalyptus is coming out with an intuitive, easy-to-use cloud user console, which aims to support on-premise dev/test cloud adoption among IT organizations and enterprises.


This easy-to-use Eucalyptus User Console consists of two main components: a browser-side JavaScript application, written in jQuery, and a proxy server that utilizes Python Boto to relay requests to the Eucalyptus cloud, written in Python Tornado, an open source, scalable, non-blocking web server released by Facebook.
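
For illustration, the proxy pattern described above boils down to something like the Tornado sketch below. This is not the actual Eucalyptus User Console code; the endpoint, credentials, and the single keypair-listing handler are placeholders:

import boto.ec2
import tornado.ioloop
import tornado.web

class KeypairsHandler(tornado.web.RequestHandler):
    def get(self):
        # Relay the browser's request to the cloud API via Boto
        # (a blocking call, simplified for illustration).
        conn = boto.ec2.connect_to_region(
            'us-east-1',                           # placeholder region/endpoint
            aws_access_key_id='YOUR_ACCESS_KEY',   # placeholder credentials
            aws_secret_access_key='YOUR_SECRET_KEY')
        keypairs = conn.get_all_key_pairs()
        self.write({'keypairs': [kp.name for kp in keypairs]})

application = tornado.web.Application([(r'/keypairs', KeypairsHandler)])

if __name__ == '__main__':
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()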

The target scale for the initial version of the user console is set to handle 150 simultaneous user activities under a single user console proxy.

Now, the challenge is how to simulate these 150 users to ensure that the user console and the proxy are able to withstand the workload of 150 active cloud users and, more importantly, how to ensure that such a workload does not jeopardize the user experience on the console.

One obvious answer is to find 150 people, train them thoroughly, and ask them to participate in the load testing. After all, 150 is doable.

However, what’s not doable is having those 150 people repeat the process over and over during the entire life cycle of the development until the release.

Then, the most “realistic” answer is to simulate those 150 people using machines. It turns out that machines are really good at repeating the same things over and over, and they tend to behave in a very predictable manner when tuned properly.

At Eucalyptus, we use Selenium, an open source web testing automation tool, to simulate actual user interactions on the user console.

The steps are:

First, use Selenium IDE on Firefox to write an automation script that completes a single path of a cloud user workflow — for instance, one simple user workflow is to log into the console, create a new keypair, and log out, and another is to log in, create a new volume, and log out.

Second, repeat the first step above for all possible use cases to ensure that all, or most, of the functionality on the console is covered, allowing all use cases to be automatically executable via Selenium IDE.

Third, export those automated IDE scripts to the Selenium Python WebDriver format, which allows the automated scripts to run on a remote server without needing to actually open up a browser.

Finally, create a wrapper for each exported script so that each test case can be executed as a command-line tool on Linux.

The link below contains the collection of automated Selenium WebDriver test scripts, command-line tools, and their installer for testing the Eucalyptus User Console:

Se34Euca (Selenium34Eucalyptus) – https://github.com/eucalyptus/se34euca

With Se34Euca, you can instantaneously convert any machine — or virtual machine if you are already a cloud geek 😉 — into a Eucalyptus cloud user simulator.

The steps are: on a Ubuntu image, run the commands below to install and set up Se34Euca:

sudo apt-get -y install git-core

git clone git://github.com/eucalyptus/se34euca.git

cd ./se34euca/script/

./installer_se34euca.py

Then, running the actual test can be as simple as:

export DISPLAY=:0

./runtest_view_page.py -i 192.168.51.6 -p 8888 -a ui-test-acct-00 -u user00 -w mypassword1 -t view_all_page_in_loop

The command line above will simulate a cloud user clicking through every single landing page on the user console within 2 seconds, then taking a rest for 5 seconds, and repeating the frantic, yet controlled, clicking again and again and again.

However, funnily enough, it turned out that the automated script’s ability to click through all pages on the user console within 2 seconds was well beyond the capability of a human user. The graph below renders the normal behavior of an actual human user. It plots the total length of the TCP packets seen each second on the user console proxy server machine via tcpdump. Notice the peak in the beginning as the user logs in, and a group of little ripples that mark the user clicking buttons or viewing different pages over a 7-minute period:

And the graph below shows the difference between the actual user behavior and the automated script behavior simulated by a single instance of Se34Euca. Notice the super-human strength of the automated script — the first half of the graph below shows the same 7-minute period shown in the graph above. According to the graph below, the automated script is able to generate 10 times the workload of a human user.

This discovery turns out to be good news; given that one Se34Euca instance can generate 10x the workload of a human user, all I need to do is launch 15 instances of Se34Euca to simulate 150 users. So, I provisioned 3 Ubuntu machines and launched 5 instances of Se34Euca on each machine:

The first fifth of the graph above covers the same period as the second graph above. What you are looking at is 15 instances of Se34Euca clicking through every single page on the Eucalyptus User Console for about two hours, starting at 21:00 mark.

When the packet length per second is averaged over a 60-second observation period, the graph looks like this:

The graph above shows that when 150 users are simultaneously logged in to the user console, the average packet transmission throughput seen on the wire is about 750Kb per second. Assuming that the user console proxy server is hooked up to a 1 Gig link, a throughput of 750Kb per second is certainly “doable” by all means. 😉

Then, how do we ensure the user experience of the console?

Simple. While the user console proxy server is being slammed by 150 click-monkeys, I’m opening up my own browser to verify that my interaction with the console is smooth as usual. 🙂

In my next blog, I will cover more details on the exact setup of the Eucalyptus User Console load testing, including the Selenium scripts and monitoring setup, and dig deeper into the analysis of the data. Please stay tuned 😉

Meanwhile, feel free to check out the blog below if you would like to preview the Eucalyptus User Console for yourself:

http://coderslike.us/2012/11/11/installing-the-eucalyptus-console-from-source-and-packages/

10 Steps to Euca Monkey

Euca Monkey is an easy-to-deploy test tool designed for performing stress tests on a Eucalyptus cloud. The tool repeatedly generates and tears down 6 types of cloud user resources: running instances, volumes, snapshots, security groups, keypairs, and IP addresses. As the resources are being populated and released, the cloud is actively queried to validate that such resources are indeed being allocated correctly per request. Then, the tool renders the progress of the stress testing using Gnuplot, an open source graphing tool, and displays the graphs as a web service in real time.
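
As a rough illustration of the validation step mentioned above, counting a user’s allocated resources with Boto looks something like the sketch below. This is not the actual Euca Monkey code, and the region and credentials are placeholders (for Eucalyptus you would point the connection at the cloud’s EC2-compatible endpoint):

import boto.ec2

conn = boto.ec2.connect_to_region(
    'us-east-1',                                # placeholder region/endpoint
    aws_access_key_id='YOUR_ACCESS_KEY',        # placeholder credentials
    aws_secret_access_key='YOUR_SECRET_KEY')

# Count each of the 6 resource types that Euca Monkey populates and releases,
# so the observed counts can be compared against the requested counts.
running = [i for r in conn.get_all_instances() for i in r.instances
           if i.state == 'running']
observed = {
    'running instances': len(running),
    'volumes': len(conn.get_all_volumes()),
    'snapshots': len(conn.get_all_snapshots(owner='self')),
    'security groups': len(conn.get_all_security_groups()),
    'keypairs': len(conn.get_all_key_pairs()),
    'ip addresses': len(conn.get_all_addresses()),
}
print(observed)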

Euca Monkey uses cloud-resource-populator — which utilizes Eutester, which is based on Boto, thus making the tool AWS-API-compatible — to populate and release resources from Eucalyptus as a user. The input for cloud-resource-populator looks like this:

[USER INFO]
account: ui-test-acct-23
user: user-23
password: mypassword23
[RESOURCES]
running instances: 2
volumes:  2
snapshots: 1
security groups: 10
keypairs: 3
ip addresses: 2
[ITERATIONS]
iterations: 200

With the given input above, cloud-resource-populator will generate the resources as specified in the [RESOURCES] section as the user ‘user-23’ under the account ‘ui-test-acct-23’. When viewed from the Eucalyptus user console, it will look as below:

As soon as the resources are populated according to the specification, cloud-resource-populator will immediately send requests to the cloud to release all the allocated resources, which makes the console view look as below:

And, as you would have guessed, the process of populating and releasing the resources is repeated [ITERATIONS] times.

The most appealing feature of Euca Monkey is that it launches a webservice to render the progress of the stress-testing in real time.

The graph above shows the input values for a few iterations of the resource population and tear-down process. Tracing the running instance line, which is in red, this graph tells us that 20 instances were started on the first request, then those 20 instances were all terminated on the second mark, bringing the count down to 0. Then 20 instances were started again, then terminated, and so on. Such operations were repeated 7 times in the graph above. Also notice that there are 4 other resources being populated and released 7 times as well.

When the cloud is behaving nicely, the actual resources should be populated and released in step with the input values in the graph above, resulting in an output graph like the one below:

While the first graph shown above renders the input values given to the tool cloud-resource-populator, this graph shows the actual values reported by the cloud. The fact that these two graphs look the same means “Yay Cloud!!”

However, occasionally during the development of Eucalyptus, you would see a graph like the one below on a rainy day in Santa Barbara:

The graph above reveals an interesting state of the cloud. Notice that the “running instance” line went from the “nice” behavior pattern to a flat line. It means that the cloud was able to launch and terminate 20 instances during the first phase, but somehow it got stuck in a state where it was not able to release instances or launch more instances, leaving it stuck with 18 running instances. But notice that the other resources were making the usual progress as before, except the security groups. It turned out that what we were witnessing was a deadlock in the Eucalyptus Cluster Controller, which occurred around the 8th hour of the stress testing. R.I.P CC. 😦

The purpose of stress-testing is to push the limit of the system to the point where malfunctions and faulty behaviors of the system can be observed. Such stress testing is crucial for the development of a distributed system like Eucalyptus; many unknowns and bugs are constantly introduced to the system as new features from various components are being integrated. Thus, having an easily deployable stress-testing tool with visualization support, such as Euca Monkey, yields tremendous benefits for the developers in an agile development environment since the tool aims to ensure the system’s stability and reliability throughout the rapid development cycle.

If you would like to take Euca Monkey for a spin, feel free to check out the GitHub link below. On a fresh Centos 6 machine or VM, it will take 10 simple steps to launch your own monkey.

https://github.com/eucalyptus/euca-monkey

Other Resources:

Eutester – https://github.com/eucalyptus/eutester

Boto – https://github.com/boto/boto

Eucalyptus QA – https://github.com/eucalyptus-qa

Open QA for Eucalyptus – https://github.com/eucalyptus/open-qa

OPEN QA for EUCALYPTUS

Open Source Software Project… What is it good for?

It’s free! It’s for the community! It’s the future of software! It’s better than closed! Everyone is doing it! It drives innovation! It’s the way of Steve Jobs (huh?)… And, it’s free!

I mean, seriously.

All I hear is nothing but the presuppositions on the greatness of open source — how it will greatly benefit you and your great organization.

But, can someone tell me how an open source project itself benefits from being out in the open?

Yes, open source is free. Yes, you can download it and use it without paying a dime, which is fantastic by anyone’s standard.

Then, you will soon realize, “Wait, that sounds just too good to be true. There must be something they get in return for giving it away for free.”

Yes, it is absolutely true, and it comes down to this one crucial benefit:

QUALITY.

It turns out, among those freeloaders, there exist these rare kinds who defy intuition and want to give it back for some reason — there are those who want to share their abilities to fix things, the abilities to break things, the abilities to point out flaws, the abilities to compliment beauty, the abilities to talk smack, and the abilities to appreciate what others have done.

When the project is out in the open, feedback from these enthusiasts comes in various forms, in very chaotic ways, which is overwhelming at first. However, every single interaction with these folks contributes to one significant attribute of the open source project — it pushes the quality of the software.

Eucalyptus has been striving to achieve an incomparable goal, that is to be the most tested, thus the most stable, in turn, the most dependable cloud infrastructure in the open. It has been a long and tough road for us to march on through the ups-and-downs of tech-industry turmoil. However, the commitment toward this goal has been unwavering.

As part of the effort to become the most dependable open source cloud infrastructure, Eucalyptus is welcoming all community members to participate in the quality assurance process of Eucalyptus development.

OPEN QA for Eucalyptus Wiki Page:

https://github.com/eucalyptus/open-qa/wiki

OPEN QA Website (Beta Version):

http://ec2-50-112-61-121.us-west-2.compute.amazonaws.com/open_qa.php

Every community member is invited to check out the Open QA website on a daily basis and provide feedback and criticism on the development process of Eucalyptus.

You will find Eucalyptus developers engaging in active conversations on the IRC channels #eucalyptus, #eucalyptus-devel, and #eucalyptus-qa on irc.freenode.net. Or, you may prefer posting your thoughts on the forum at https://engage.eucalyptus.com. Or, please feel free to file bugs directly against Eucalyptus via JIRA at https://eucalyptus.atlassian.net/browse/EUCA.

But, in whichever way you decide to engage with us, please never hesitate to:

Live long and prosper \\//

— Open QA for Eucalyptus —
