Kyo Lee

Open-Source Cloud Blog

Category: DevOps

Docker Strange Dev: How I learned to stop worrying and love the VM

A couple of months ago, my laptop died. I remember the day I brought it home for the first time. Still in college, I made a big decision, despite my marginal finances, to go with the top model. “It should last at least five years,” I convinced myself. Seven years later, I got the news from the certified Apple repair person: “Well, you should just buy a new laptop rather than try to fix this one.” So I did.

Now I have a new laptop, which currently sits on a pedestal. Watching how fast applications open and close on this laptop makes me cry. The future has arrived. I tell myself, “No! I will not f*** this one up this time! I will never install any software on this laptop ever!” Of course, I am in denial. This pure awesomeness will eventually decay. I predict that the rate of decay accelerates exponentially with the number of applications installed on the laptop (disclaimer: no supporting data exists for this assertion, other than my paranoia).

I begin searching for the answer to my quest: install no software. So I try Docker.


The most noticeable difference between using Docker, which runs Linux containers, and using a virtual machine (VM) is speed.

Both options isolate the software’s running environment from the base operating system. Thus both deliver the desirable paradigm: build once, run everywhere. And both address my concern that the base operating system must remain minimal and untouched. However, Docker stands out from the classic VM approach by allowing the cloud application, a virtual image with the desired software installed, to start within milliseconds. Compare that to the minutes it takes to boot a VM instance.
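
You can get a rough feel for this yourself. A quick, unscientific benchmark (assuming Docker is installed and the tiny busybox image has already been pulled):

time docker run busybox true

The container starts, runs its command, and exits in a fraction of a second; a VM has to boot an entire operating system before it can do anything at all.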


What difference does this speed make?

Docker removes a big chunk of the mental roadblock for developers. Thanks to Docker’s superior response time, developers can barely perceive a distinction between running applications in a virtual image (a virtual container, to be precise) and running them on the base OS. In addition to its responsiveness, Docker appeals to developers by obsoleting the tedious procedures of booting a VM instance and managing the instance’s life cycle. Unlike the previous VM-centric approach, Docker embraces an application-centric design principle. Docker’s command-line interface (CLI) below asks us two simple questions:

1. Which image to use?

2. What command to run?

docker run [options] <image> <command>

With Docker, I can type single-line commands to run cloud applications, just like other Linux commands. Then I will have my database servers running. I will have my web servers running. I will have my API servers running. And I will have my file servers running. All of this is done with single-line commands. The entire web stack can now be running on my laptop within seconds. The best part? When I’m done, no trace is left on my laptop, as if the servers never existed. The pure awesomeness prevails.
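
For a taste of what those one-liners look like, here are a couple of hypothetical examples using stock images from the public Docker registry (illustrations only, not part of the Eucalyptus setup below):

docker run -d -p 5432:5432 postgres    # a database server in one line
docker run -d -p 80:80 nginx           # a web server in one line

The -d flag runs the container in the background, and -p maps a container port onto the laptop. When you are done, docker ps -a lists every container, running or stopped, and docker rm removes one, leaving no trace behind.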


Running Eucalyptus Console on Docker


For those who want to check out Eucalyptus Console as a way to access Amazon Web Services, here are the steps to launch Eucalyptus Console using Docker on OS X.

Step 1. Install Docker on your laptop

Here is a great link that walks you through how to install Docker on OS X:

http://docs.docker.com/installation/mac/
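
Once the installer finishes, the boot2docker flow (the tool the OS X installer sets up at the time of writing) looks roughly like this; boot2docker prints the exact environment variables to export when the VM comes up:

boot2docker init    # create the Docker VM (one-time setup)
boot2docker up      # start the VM; note the DOCKER_HOST value it prints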

Step 2. Pull the Eucalyptus Console Docker images

Run the command below to pull the Eucalyptus Console Docker images (it will take some time to download about 1.5 GB of image files):

docker pull kyolee310/eucaconsole

Run the command below to verify that the eucaconsole images have been pulled:

docker images
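
If the pull succeeded, the output should contain a row along these lines (the ID, date, and exact size will differ; the values below are placeholders):

REPOSITORY              TAG           IMAGE ID      CREATED      VIRTUAL SIZE
kyolee310/eucaconsole   package-4.0   <image id>    <date>       ~1.5 GB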

For those who want to build the images from scratch, here is the link to the Dockerfile used:

https://github.com/eucalyptus/dockereuca
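
A rough sketch of that from-scratch build, assuming the Dockerfile sits at the root of the repository (the image tag here is just an example):

git clone https://github.com/eucalyptus/dockereuca
cd dockereuca
docker build -t eucaconsole .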

Step 3. Update the Docker VM’s clock

When running Docker on OS X, make sure that the Docker VM’s clock is synchronized properly. A skewed clock can cause problems for some applications running on Docker. To fix this issue, you will need to log into the Docker VM and synchronize the clock manually.

You can SSH into the Docker VM using the command below:

boot2docker ssh

Once logged in, run the command below to sync the clock:

sudo ntpclient -s -h pool.ntp.org

Run the command below to verify that the clock has been synced:

date

One more patch to apply: create an empty “/etc/localtime” file so that you can link your OS X localtime file to the Docker VM’s localtime file at runtime:

sudo touch /etc/localtime

Exit the SSH session:

exit
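
As a side note, boot2docker ssh can also take a one-off command, so the same fix can likely be applied straight from the host without an interactive session (untested sketch):

boot2docker ssh "sudo ntpclient -s -h pool.ntp.org"
boot2docker ssh "sudo touch /etc/localtime"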

This issue is being tracked here:

https://github.com/boot2docker/boot2docker/issues/476


Step 4. Launch Eucalyptus Console via Docker

Run the command below to launch Eucalyptus Console on Docker:

docker run -i -t -v /etc/localtime:/etc/localtime:ro -p 8888:8888 kyolee310/eucaconsole:package-4.0 bash

It’s a shame; running a live “bash” session is not the Docker way of doing things, but bear with me for the moment until I figure out how to run Eucalyptus Console properly without using the “service” command.

The command above opens a bash shell session in the eucaconsole container. From that shell, run the command below to launch Eucalyptus Console:

service eucaconsole start
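
For reference, a more Docker-like invocation might run the container detached instead of keeping a bash session open. The sketch below is untested and assumes the service script behaves inside the container; tail -f /dev/null is just a common trick to keep the container alive after the service forks into the background:

docker run -d -v /etc/localtime:/etc/localtime:ro -p 8888:8888 kyolee310/eucaconsole:package-4.0 /bin/bash -c "service eucaconsole start && tail -f /dev/null"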

Step 5. Open Eucalyptus Console in a browser

Run the command below to find out the IP of the Docker VM:

boot2docker ip

The output will look like this:

    The VM’s Host only interface IP address is: 192.168.59.103

Using the IP above, access Eucalyptus Console at port 8888:

e.g., http://192.168.59.103:8888/


In order to access AWS, you will need to obtain your AWS access key and secret key. Here is AWS’s documentation on how to obtain them:

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html

Step 6. Log into AWS using Eucalyptus Console


A Developer’s Story on How Eucalyptus Saved the Day

This is a short story about how a UI developer at Eucalyptus used Eucalyptus to save time while developing Eucalyptus.


The agenda of the day is to set up Travis CI for the latest Eucalyptus user console, Koala. To quote Wikipedia, “Travis CI is a hosted, distributed continuous integration service used to build and test projects hosted at GitHub.” In other words, we want to set up an automated service hook on Koala’s GitHub repository so that whenever developers commit new code, “auto-magic” takes place somewhere on the Internet, ensuring that the developers did not screw things up by mistake, which, in turn, allows us developers to sit back and enjoy a warm cup of “post-commit-victory” tea while our thoughts drift away on the sea of Reddit.

But before arriving at such a Utopia, first things first: read the instructions on Travis CI.

Luckily, Travis CI has put together a nice and comforting set of documentation on how to hook a project into Travis CI (http://about.travis-ci.org/docs/user/getting-started/), as well as an impressively pain-free registration interface. Things are as easy as clicking buttons for the first few steps.

And, of course, nothing ever comes that easy. Now I am looking at the part where I need to create the YAML configuration file (.travis.yml) for Koala’s build procedures. Done with the button-clicking. Time to put down the cup of tea, because now I have some reading and thinking to do.

A few minutes later, I am stunned by the line below:

“Travis CI virtual machines are based on Ubuntu 12.04 LTS Server Edition 64 bit.”

Oh, bummer.

The main development platform for Koala has been CentOS 6, meaning Koala’s build dependencies and scripts have been targeted at a CentOS 6 environment. This means I need to go over the build procedures and dependency settings so that Koala can be built and tested on Ubuntu 12.04. But first, where do I find those Ubuntu machines?

Then, the realization: ‘Wait a second here. I have Eucalyptus.’

I open up a browser and log into the Eucalyptus system that I have been using as the backend for developing Koala. I launch a couple of Ubuntu 12.04 instances. Within a minute, I have two fresh Ubuntu 12.04 virtual machines up and running.

Immediately I log in to the first instance and start installing Koala to validate the build procedures in an Ubuntu 12.04 environment. Along the way, I discover various little issues in this new environment and tweak things to fine-tune Koala’s build procedures. Once things feel ready, I log into the other Ubuntu instance to verify the newly adjusted build procedures in its fresh setting. More mistakes and issues are captured, and more adjustments are made. Meanwhile, the first instance has been shut down and a new Ubuntu instance brought up. With this new instance, I can rinse and repeat the validation of the build procedures. Of course, there are some mistakes again. They get fixed and adjusted. Meanwhile, another instance goes down and comes up fresh.

The juggling of instances lasts a couple more rounds until the build procedures are perfected. Now I am confident that Koala will build successfully in an Ubuntu 12.04 environment. I commit the new .travis.yml build script to GitHub. It’s time for the warm cup of “post-commit-victory” tea.
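
For the curious: Travis CI reads its configuration from a YAML file named .travis.yml at the repository root. A minimal sketch for a Python project might look like the following; the install and script lines are hypothetical stand-ins, not Koala’s actual build:

language: python
python:
  - "2.7"
install:
  - pip install -r requirements.txt    # hypothetical dependency file
script:
  - python setup.py test               # hypothetical test entry point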


DevOps Culture — Fail Fast on Eucalyptus

At a meetup event down in San Diego, California, Eucalyptus had a chance to meet Sander van Zoest (@svanzoest), the VP of Technology at OneHealth (http://www.onehealth.com/) and the organizer of the San Diego DevOps group (http://www.meetup.com/sddevops/). Sander and his team at OneHealth have been using a Eucalyptus cloud for some time. Asked why OneHealth runs Eucalyptus in-house, Sander had some interesting stories to tell about dealing with health-related data and the company’s DevOps engineering culture.


Due to the strict regulations on Protected Health Information (PHI), OneHealth would need to take exceptionally strong measures if it were to provide its services on AWS; Sander spent a good amount of time explaining to us how demanding it is to satisfy the regulations. Such barriers make it complicated to push any personally identifiable health information to the cloud.

In the AWS case, the specific barrier was that AWS provides no legal protection when sensitive data is stored in its cloud storage. For instance, HIPAA and HITECH regulations require that OneHealth be able to promise a 72-hour response time to inform its customers about a data breach, should one ever happen, and to provide an ETA for identifying and patching the security hole that caused the breach.

Sander points out that, at the moment, AWS does not guarantee such protections and services. For this reason, OneHealth’s production environment is deployed at a Rackspace co-location facility, since Rackspace provides HIPAA Business Associate Addendums. However, given the evolving nature of the public cloud, it is very “cloudy” to predict how things will change in the near future. The recent announcement by AWS of CloudHSM (http://aws.typepad.com/aws/2013/03/aws-cloud-hsm-secure-key-storage-and-cryptographic-operations.html), although it does not cover legal protection, is a good indicator of AWS’s interest in providing secure storage services moving forward.


What this uncertain, “cloudy” future means for engineering at OneHealth is employing a variety of infrastructure environments to take advantage of each platform while staying flexible. It becomes essential to design OneHealth’s services and applications to be deployable on bare-metal systems at Rackspace (production environment), AWS (sandbox/staging environment), Eucalyptus (in-house continuous integration and testing environment), and engineers’ laptops using Vagrant (http://www.vagrantup.com/) (development and testing environment).

Under such heterogeneous systems, from production down to the engineer’s laptop, the development environment (the OS, dependencies, configurations, etc.) needs to be kept uniform via virtualization and automation, allowing seamless pushing of new code from the laptop up to production. For handling the life cycle of machines and VM instances, the engineers at OneHealth are big fans of Chef (http://www.opscode.com/chef/), which makes configuration management portable across infrastructure platforms. For virtual machines, the instance images are prepared via Debian preseed files while leveraging the open-source tool VeeWee (https://github.com/jedi4ever/veewee).


At OneHealth, the philosophy of DevOps is deeply embedded in every aspect of development and operations. The concept of DevOps was not new to many of the engineers, who brought the ideas of “Infrastructure as Code” and “Commit Often and Fail Fast” from previous companies such as MP3.com and Joost.

Speaking of DevOps culture, one fun fact Sander mentioned, which goes against intuition for many traditional IT shops, is that the operations team at OneHealth likes to take down instances and rebuild them regularly. Recycling the instances ensures the “freshness” of the deployed services and applications. The operations engineers would be more concerned if an instance’s uptime exceeded, say, 30 days, because that would mean the content of the instance was outdated, possibly containing unfixed bugs or security holes. If the deployment setup were doing what it was supposed to do, it would have killed the outdated instance and brought up a new one with the latest updates.

The same goes for the development environment. It is much better to refresh the dev-environment instances with frequent relaunching and rebuilding than to have developers working on a stale dev environment, which turns out to be more harmful to development. Plus, this destroy-and-rebuild discipline encourages developers to check code into a version-controlled repository consistently, allowing early detection of conflicts in the code.

All of these procedures, bringing together datacenter automation and configuration management, are part of a very new movement in software development now labeled “DevOps”. The DevOps folks often joke that even a few years ago the terminology didn’t exist, but now DevOps has become the most sought-after practice in IT, all thanks to the widespread adoption of cloud computing, which gave birth to the programmable infrastructure.

