Thursday, December 4, 2014

Docker


As my project was starting up, I was looking for a good way to build my application and run it both locally and on servers. I looked into technologies like Chef, Puppet, and Salt, but when I went to spin up my Digital Ocean server it offered another option I hadn't heard much about: I could create an image with Docker. Since I didn't know what this was, I decided to look into it.

What is Docker

The definition from their website is

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

Docker allows me to create a simple container that can run my application. I wouldn't run all my components (DB and app server) in one container; instead I would run my app server and code in one container and my DB in another. This sounded great to me. I have been a big fan of single-file deploys for a long time, and this was a step further: I could take my application, package it up, and ship it as a whole container through my environments. The only thing that would change from environment to environment was the other containers it was wired to. I could also easily have different versions of Java on my machine to run the application, different versions of Tomcat, or of PostgreSQL. I could create a container definition for any of these, store it on Docker Hub, and anyone on the team, or even the build server, could pull it down and run the exact same version.
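At the time (Docker 1.x), wiring containers together like this was typically done with container links. A hypothetical sketch, assuming an app image named my_team/app_server and the database container created later in this post:

```shell
# hypothetical image/container names; --link injects the db container's
# address into the app container under the alias "db"
docker run -d --name app_server --link dockerServerDb:db my_team/app_server
```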

Creating a Container

One of the places Docker helped was with setting up a local DB. It allowed everyone on the team to have the same DB, set up the same way. Here are the steps I went through to set up the container. For installing Docker, two good references are How to Use Docker on OS X: The Missing Guide and the official guide, Installing Docker on Mac OS X. Once you have Docker installed you can start to create and run your container.

Create Dockerfile

#
# Dockerfile for Postgresql
#

FROM ubuntu
MAINTAINER Joseph Muraski

# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8

# Add PostgreSQL's repository. It contains the most recent stable release
#   of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list

# Update the Ubuntu and PostgreSQL repository indexes
RUN apt-get update

# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
# There are some warnings (in red) that show up during the build. You can hide
#  them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get -y -q install python-software-properties software-properties-common
RUN apt-get -y -q install postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3

# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``

# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres

# Create a PostgreSQL role named ``docker`` with ``docker_pass`` as the password and
# then create a database `docker_local_db` owned by the ``docker`` role.
# Note: here we use ``&&\`` to run commands one after the other - the ``\``
#     allows the RUN command to span multiple lines.
RUN   /etc/init.d/postgresql start &&\
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker_pass';" &&\
createdb -O docker docker_local_db

# Adjust PostgreSQL configuration so that remote connections to the
# database are possible.
RUN echo "host all  all    0.0.0.0/0  md5" >> /etc/postgresql/9.3/main/pg_hba.conf

# And add ``listen_addresses`` to ``/etc/postgresql/9.3/main/postgresql.conf``
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf

# Expose the PostgreSQL port
EXPOSE 5432

# Add VOLUMEs to allow backup of config, logs and databases
VOLUME  ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]

# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]

This Dockerfile does a few things: it installs PostgreSQL, exposes port 5432 outside of the container, and maps some shared volumes. Let's go over some of the commands that are used and what they are for.
- FROM - Sets the base image for the file. It can be a bare Linux install like ubuntu or a fully crafted image that someone has shared in a Docker repository.
- MAINTAINER - States who created and maintains this Dockerfile
- RUN - Tells Docker to run a command against the container. This can be any Unix command, such as apt-get install, curl, wget, or chmod
- USER - Sets the user that subsequent commands run as
- EXPOSE - Tells Docker which ports the container listens on
- CMD - Provides the default command for executing the container

Commands that are not used in this Dockerfile but that I have used frequently are
- ADD - Adds files to the image's file system; the source can be a URL or a local file, and local tar archives are unpacked automatically
- COPY - Similar to ADD, but only copies local files over; it does not accept URLs and will not unpack archives
- ENV - Sets environment variables inside the container; it can also be used to alter PATH
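Since ADD, COPY, and ENV don't appear in the Dockerfile above, here is a small illustrative fragment; the file and directory names are hypothetical:

```dockerfile
# hypothetical paths for illustration
ENV APP_HOME /opt/app
ENV PATH $APP_HOME/bin:$PATH

# COPY moves a local file into the image as-is
COPY config/server.properties $APP_HOME/config/server.properties

# ADD can also fetch a URL, or unpack a local tar archive into the image
ADD app-dist.tar.gz $APP_HOME/
```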

You can get a complete list of Dockerfile commands from the documentation. The Digital Ocean article Docker Explained: Using Dockerfiles to Automate Building of Images, the Best practices for writing Dockerfiles page in the Docker documentation, and the Guidance for Docker Image Authors guide are all good resources, and there are numerous other articles and tips on how to write a good Dockerfile.

Create Build Script

Here is a small shell script used to build the image.

#!/bin/sh
docker build -t db_server/postgresql .

This very simple script, placed in the same location as the Dockerfile, will create the image with the name db_server/postgresql. For more options on the docker build command you can read the command line reference.
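A couple of build options I find handy (these flags come straight from the docker build and docker images references):

```shell
# tag the image with a version as well as the default latest tag
docker build -t db_server/postgresql:9.3 .

# force a full rebuild, ignoring cached layers
docker build --no-cache -t db_server/postgresql .

# confirm the image exists
docker images db_server/postgresql
```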

Map the Ports

Since boot2docker runs in VirtualBox, you will need to map the ports out so that you can connect to them locally. This script forwards port 5432 on the host machine to port 49155 in the VirtualBox VM.

#!/bin/sh

VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port-dockerDb49155,tcp,,5432,,49155";
VBoxManage modifyvm "boot2docker-vm" --natpf1 "udp-port-dockerDb49155,udp,,5432,,49155";
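To confirm the rules took effect, or to undo them later, VBoxManage can list and delete forwarding rules by name; a sketch:

```shell
# list the VM's NAT forwarding rules
VBoxManage showvminfo "boot2docker-vm" --machinereadable | grep Forwarding

# remove a rule by the name it was created with
VBoxManage modifyvm "boot2docker-vm" --natpf1 delete "tcp-port-dockerDb49155"
```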

Create Container

As of Docker 1.3 you can use docker create to create the container without running it. Before this, you had to use docker run the first time and docker start on subsequent starts. Now you can use docker create once and then just start the container every time.

#!/bin/sh

docker create -p 49155:5432 --name dockerServerDb db_server/postgresql
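For comparison, on Docker versions before 1.3 the equivalent first run looked something like this, after which only start and stop were needed:

```shell
# pre-1.3 approach: run creates and starts the container in one step
docker run -d -p 49155:5432 --name dockerServerDb db_server/postgresql

# subsequent restarts
docker stop dockerServerDb
docker start dockerServerDb
```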

Start Docker

#!/bin/sh

echo "Starting docker.....\n"
boot2docker up

echo "\nStarting dockerServerDb docker instance\nType the following command to connect to the db"
echo "psql -h localhost -p 5432 -d docker_local_db -U docker --password\n\n"

docker start dockerServerDb

This script makes sure that boot2docker is running, then starts the container. It is also nice enough to remind you how to connect to the db if you have psql installed locally.

Conclusion

While setting up PostgreSQL is not the simplest case, it is a good example of how Docker can help you locally, and it shows all the steps you would need. These files can all be checked into source control, and every developer on the team will have the same db running the same way. You could easily have different versions of PostgreSQL running for different projects, or add other services like Redis if needed. I find Docker to be a great tool, and this is only the beginning of what it can do for you.

Monday, May 5, 2014

Speeding up the Feedback loop on Grails Unit Tests


I am doing Test Driven Development on a Groovy/Grails project. I use IntelliJ for my IDE, and there was a time when working this way and running tests in IntelliJ was fine, but I got spoiled. For the last year I have been working with AngularJS and using Karma to run my Jasmine JavaScript unit tests. Those tests ran fast and continuously. Now every time I run my unit tests for Grails in IntelliJ I am waiting, and I can feel how the delay impacts my flow. By using grails interactive mode, tmux, and Guard, I was able to get the same fast feedback.

Step one - Grails Interactive

Running the tests in IntelliJ or by typing in

grails test-app unit:

is just too slow. It starts up the container every time, which takes at least 15 seconds. This doesn't seem like a lot, but it adds up over the course of the day. By using grails in interactive mode the start-up time is removed and the tests complete faster. Once in interactive mode the test command can be repeated by pressing the up arrow. This was the first step, but changing to the terminal window to hit the up arrow after every change was also taking too much time. The next step was to make the tests run automatically when I save a file so I don't have to change windows.
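Starting interactive mode is just a matter of launching grails with no arguments and issuing the test command at its prompt:

```shell
$ grails
grails> test-app unit:
# edit code, then press the up arrow and Enter to rerun
```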

Step Two - Watch Mode

To accomplish watch mode I had to use a few tools. The first thing is to watch the files and to execute a command when something changes. To do this I used the Ruby tool Guard. I set up a guard file with the following

guard :shell do
  watch(/\.groovy$/) do |m|
    n m[0], 'Changed'
    `tmux send-keys -t :1.1 'test-app unit: ' C-m`
  end
end

This Guardfile watches all groovy files under the directory and then executes

tmux send-keys -t :1.1 'test-app unit: ' C-m

The tmux send-keys command acts as if I typed the command myself in the session, executing test-app every time a file is saved.
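One detail worth noting: the watch pattern is an ordinary Ruby regex, and anchoring it as /\.groovy$/ keeps Guard from firing on files that merely contain "groovy" somewhere in their name. A quick demonstration of what the equivalent anchored pattern matches, using grep:

```shell
# demonstration only: the same anchored pattern applied with grep -E
# prints only UserService.groovy
printf 'UserService.groovy\nbuild.gradle\ngroovy-notes.txt\n' | grep -E '\.groovy$'
```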
tmux and tmuxinator

In order for the above command to work, I need to have a tmux session set up. I use tmux with tmuxinator to create the session. My tmuxinator file for creating the session looks like this

# ~/.tmuxinator/server.yml
name: server_dev
root: ~/Documents/workspace/server 
windows:
  - testing:
      layout: main-horizontal
      panes:
        - grails:
          - grails
        - guard -G ../guardFile
        - #empty, just shell

This layout gives me three panes: pane one is the grails interactive pane, pane two launches Guard, and pane three is a shell prompt in case anything else needs to be run.

One note about the Guardfile and the tmuxinator script: I changed my tmux settings to make windows and panes use 1-based indexes instead of 0-based. If you do not modify this setting you will need to adjust the pane target in the Guardfile.
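For reference, the tmux settings in question are base-index and pane-base-index; adding these to ~/.tmux.conf gives the 1-based numbering that the Guardfile target :1.1 assumes:

```shell
# ~/.tmux.conf
set -g base-index 1          # windows start at 1
setw -g pane-base-index 1    # panes start at 1
```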

This setup works great, although when I am working on a single test it would be nice if only that test ran, not the whole suite. To do this I modified the Guardfile to this

guard :shell do
  watch(/\.groovy$/) do |m|
    n m[0], 'Changed'
    if ENV['GRAILS_TEST'].nil? || ENV['GRAILS_TEST'].empty?
      `tmux send-keys -t :1.1 'test-app unit:' C-m`
    else
      `tmux send-keys -t :1.1 'test-app unit: #{ENV["GRAILS_TEST"]}' C-m`
    end
  end
end

The change here is to look for an environment variable in the Guard session; if it is set, Guard adds that test to the send-keys command and only that test runs. Setting the environment variable at the Guard prompt is a bit verbose; fortunately Guard runs using Pry, which allows me to create commands. I created two:

  • setTest - takes a test name and will set it to the env variable
  • clearTest - clears the variable and will make all the tests run again

To create the pry commands for guard you need to modify the .guardrc file in your home directory. I added the following

Pry::Commands.block_command "setTest", "Set a specific test to run from guard" do |x|
  ENV['GRAILS_TEST'] = x
  output.puts "Guard will only run #{x} now"
end
Pry::Commands.block_command "clearTest", "Clears out specific test so all are run" do
  ENV['GRAILS_TEST'] = ''
  output.puts "Guard will run all unit tests now"
end 
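With those in place, using the commands from the Guard/Pry prompt looks like this (the spec name here is hypothetical):

```shell
[1] guard(main)> setTest UserServiceSpec
Guard will only run UserServiceSpec now
[1] guard(main)> clearTest
Guard will run all unit tests now
```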

Mission Complete

Now I have my grails tests running every time I save a file. I can specify which tests to run and they execute quickly.